00:00:00.001 Started by upstream project "autotest-per-patch" build number 132543
00:00:00.001 originally caused by:
00:00:00.001  Started by user sys_sgci
00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.039 The recommended git tool is: git
00:00:00.039 using credential 00000000-0000-0000-0000-000000000002
00:00:00.041 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.060 Fetching changes from the remote Git repository
00:00:00.064 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.088 Using shallow fetch with depth 1
00:00:00.088 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.088 > git --version # timeout=10
00:00:00.134 > git --version # 'git version 2.39.2'
00:00:00.134 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.191 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.191 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.125 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.138 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.150 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.150 > git config core.sparsecheckout # timeout=10
00:00:03.163 > git read-tree -mu HEAD # timeout=10
00:00:03.180 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.207 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.208 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.352 [Pipeline] Start of Pipeline
00:00:03.366 [Pipeline] library
00:00:03.367 Loading library shm_lib@master
00:00:03.368 Library shm_lib@master is cached. Copying from home.
00:00:03.383 [Pipeline] node
00:00:03.390 Running on VM-host-SM16 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.392 [Pipeline] {
00:00:03.402 [Pipeline] catchError
00:00:03.403 [Pipeline] {
00:00:03.417 [Pipeline] wrap
00:00:03.426 [Pipeline] {
00:00:03.435 [Pipeline] stage
00:00:03.437 [Pipeline] { (Prologue)
00:00:03.457 [Pipeline] echo
00:00:03.458 Node: VM-host-SM16
00:00:03.465 [Pipeline] cleanWs
00:00:03.473 [WS-CLEANUP] Deleting project workspace...
00:00:03.473 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.479 [WS-CLEANUP] done
00:00:03.675 [Pipeline] setCustomBuildProperty
00:00:03.762 [Pipeline] httpRequest
00:00:04.219 [Pipeline] echo
00:00:04.220 Sorcerer 10.211.164.20 is alive
00:00:04.229 [Pipeline] retry
00:00:04.231 [Pipeline] {
00:00:04.245 [Pipeline] httpRequest
00:00:04.249 HttpMethod: GET
00:00:04.249 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.250 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.251 Response Code: HTTP/1.1 200 OK
00:00:04.251 Success: Status code 200 is in the accepted range: 200,404
00:00:04.252 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.397 [Pipeline] }
00:00:04.416 [Pipeline] // retry
00:00:04.422 [Pipeline] sh
00:00:04.699 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.711 [Pipeline] httpRequest
00:00:05.124 [Pipeline] echo
00:00:05.126 Sorcerer 10.211.164.20 is alive
00:00:05.133 [Pipeline] retry
00:00:05.134 [Pipeline] {
00:00:05.143 [Pipeline] httpRequest
00:00:05.146 HttpMethod: GET
00:00:05.147 URL: http://10.211.164.20/packages/spdk_971ec01268cdf972b0fd02014ffe2998b80931e9.tar.gz
00:00:05.147 Sending request to url: http://10.211.164.20/packages/spdk_971ec01268cdf972b0fd02014ffe2998b80931e9.tar.gz
00:00:05.148 Response Code: HTTP/1.1 200 OK
00:00:05.149 Success: Status code 200 is in the accepted range: 200,404
00:00:05.149 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_971ec01268cdf972b0fd02014ffe2998b80931e9.tar.gz
00:00:28.664 [Pipeline] }
00:00:28.682 [Pipeline] // retry
00:00:28.690 [Pipeline] sh
00:00:28.969 + tar --no-same-owner -xf spdk_971ec01268cdf972b0fd02014ffe2998b80931e9.tar.gz
00:00:32.311 [Pipeline] sh
00:00:32.589 + git -C spdk log --oneline -n5
00:00:32.590 971ec0126 bdevperf: Add hide_metadata option
00:00:32.590 894d5af2a bdevperf: Get metadata config by not bdev but bdev_desc
00:00:32.590 075fb5b8c bdevperf: Store the result of DIF type check into job structure
00:00:32.590 7cc16c961 bdevperf: g_main_thread calls bdev_open() instead of job->thread
00:00:32.590 3c5c3d590 bdevperf: Remove TAILQ_REMOVE which may result in potential memory leak
00:00:32.608 [Pipeline] writeFile
00:00:32.623 [Pipeline] sh
00:00:32.903 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:32.914 [Pipeline] sh
00:00:33.192 + cat autorun-spdk.conf
00:00:33.192 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.192 SPDK_RUN_ASAN=1
00:00:33.192 SPDK_RUN_UBSAN=1
00:00:33.192 SPDK_TEST_RAID=1
00:00:33.192 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:33.199 RUN_NIGHTLY=0
00:00:33.201 [Pipeline] }
00:00:33.216 [Pipeline] // stage
00:00:33.234 [Pipeline] stage
00:00:33.237 [Pipeline] { (Run VM)
00:00:33.253 [Pipeline] sh
00:00:33.533 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:33.533 + echo 'Start stage prepare_nvme.sh'
00:00:33.533 Start stage prepare_nvme.sh
00:00:33.533 + [[ -n 4 ]]
00:00:33.533 + disk_prefix=ex4
00:00:33.533 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:33.533 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:33.533 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:33.533 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.533 ++ SPDK_RUN_ASAN=1
00:00:33.533 ++ SPDK_RUN_UBSAN=1
00:00:33.533 ++ SPDK_TEST_RAID=1
00:00:33.533 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:33.533 ++ RUN_NIGHTLY=0
00:00:33.533 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:33.533 + nvme_files=()
00:00:33.533 + declare -A nvme_files
00:00:33.533 + backend_dir=/var/lib/libvirt/images/backends
00:00:33.533 + nvme_files['nvme.img']=5G
00:00:33.533 + nvme_files['nvme-cmb.img']=5G
00:00:33.533 + nvme_files['nvme-multi0.img']=4G
00:00:33.533 + nvme_files['nvme-multi1.img']=4G
00:00:33.533 + nvme_files['nvme-multi2.img']=4G
00:00:33.533 + nvme_files['nvme-openstack.img']=8G
00:00:33.533 + nvme_files['nvme-zns.img']=5G
00:00:33.533 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:33.533 + (( SPDK_TEST_FTL == 1 ))
00:00:33.533 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:33.533 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:33.533 + for nvme in "${!nvme_files[@]}"
00:00:33.533 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:00:33.533 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:33.533 + for nvme in "${!nvme_files[@]}"
00:00:33.533 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:00:33.533 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:33.533 + for nvme in "${!nvme_files[@]}"
00:00:33.533 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:00:33.533 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:33.533 + for nvme in "${!nvme_files[@]}"
00:00:33.533 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:00:33.533 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:33.533 + for nvme in "${!nvme_files[@]}"
00:00:33.533 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:00:33.533 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:33.533 + for nvme in "${!nvme_files[@]}"
00:00:33.533 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:00:33.533 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:33.533 + for nvme in "${!nvme_files[@]}"
00:00:33.533 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:00:33.533 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:33.533 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:00:33.533 + echo 'End stage prepare_nvme.sh'
00:00:33.533 End stage prepare_nvme.sh
00:00:33.544 [Pipeline] sh
00:00:33.827 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:33.827 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39
00:00:33.827
00:00:33.827 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:33.827 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:33.827 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:33.827 HELP=0
00:00:33.827 DRY_RUN=0
00:00:33.827 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:00:33.827 NVME_DISKS_TYPE=nvme,nvme,
00:00:33.827 NVME_AUTO_CREATE=0
00:00:33.827 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:00:33.827 NVME_CMB=,,
00:00:33.827 NVME_PMR=,,
00:00:33.827 NVME_ZNS=,,
00:00:33.827 NVME_MS=,,
00:00:33.827 NVME_FDP=,,
00:00:33.827 SPDK_VAGRANT_DISTRO=fedora39
00:00:33.827 SPDK_VAGRANT_VMCPU=10
00:00:33.827 SPDK_VAGRANT_VMRAM=12288
00:00:33.827 SPDK_VAGRANT_PROVIDER=libvirt
00:00:33.827 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:33.827 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:33.827 SPDK_OPENSTACK_NETWORK=0
00:00:33.827 VAGRANT_PACKAGE_BOX=0
00:00:33.827 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:33.827 FORCE_DISTRO=true
00:00:33.827 VAGRANT_BOX_VERSION=
00:00:33.827 EXTRA_VAGRANTFILES=
00:00:33.827 NIC_MODEL=e1000
00:00:33.827
00:00:33.827 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:33.827 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:37.109 Bringing machine 'default' up with 'libvirt' provider...
00:00:37.676 ==> default: Creating image (snapshot of base box volume).
00:00:37.934 ==> default: Creating domain with the following settings...
00:00:37.934 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732647004_dbd97467140c2ec6eb5a
00:00:37.934 ==> default: -- Domain type: kvm
00:00:37.934 ==> default: -- Cpus: 10
00:00:37.934 ==> default: -- Feature: acpi
00:00:37.934 ==> default: -- Feature: apic
00:00:37.934 ==> default: -- Feature: pae
00:00:37.934 ==> default: -- Memory: 12288M
00:00:37.934 ==> default: -- Memory Backing: hugepages:
00:00:37.934 ==> default: -- Management MAC:
00:00:37.934 ==> default: -- Loader:
00:00:37.934 ==> default: -- Nvram:
00:00:37.934 ==> default: -- Base box: spdk/fedora39
00:00:37.934 ==> default: -- Storage pool: default
00:00:37.934 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732647004_dbd97467140c2ec6eb5a.img (20G)
00:00:37.934 ==> default: -- Volume Cache: default
00:00:37.934 ==> default: -- Kernel:
00:00:37.934 ==> default: -- Initrd:
00:00:37.934 ==> default: -- Graphics Type: vnc
00:00:37.934 ==> default: -- Graphics Port: -1
00:00:37.934 ==> default: -- Graphics IP: 127.0.0.1
00:00:37.934 ==> default: -- Graphics Password: Not defined
00:00:37.934 ==> default: -- Video Type: cirrus
00:00:37.934 ==> default: -- Video VRAM: 9216
00:00:37.934 ==> default: -- Sound Type:
00:00:37.934 ==> default: -- Keymap: en-us
00:00:37.934 ==> default: -- TPM Path:
00:00:37.934 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:37.934 ==> default: -- Command line args:
00:00:37.934 ==> default: -> value=-device,
00:00:37.934 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:37.934 ==> default: -> value=-drive,
00:00:37.934 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:00:37.934 ==> default: -> value=-device,
00:00:37.934 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:37.934 ==> default: -> value=-device,
00:00:37.934 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:37.934 ==> default: -> value=-drive,
00:00:37.934 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:37.934 ==> default: -> value=-device,
00:00:37.934 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:37.934 ==> default: -> value=-drive,
00:00:37.934 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:37.934 ==> default: -> value=-device,
00:00:37.935 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:37.935 ==> default: -> value=-drive,
00:00:37.935 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:37.935 ==> default: -> value=-device,
00:00:37.935 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.193 ==> default: Creating shared folders metadata...
00:00:38.193 ==> default: Starting domain.
00:00:40.095 ==> default: Waiting for domain to get an IP address...
00:00:58.173 ==> default: Waiting for SSH to become available...
00:00:59.544 ==> default: Configuring and enabling network interfaces...
00:01:04.808 default: SSH address: 192.168.121.182:22
00:01:04.808 default: SSH username: vagrant
00:01:04.808 default: SSH auth method: private key
00:01:06.182 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:14.290 ==> default: Mounting SSHFS shared folder...
00:01:15.661 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:15.661 ==> default: Checking Mount..
00:01:16.593 ==> default: Folder Successfully Mounted!
00:01:16.593 ==> default: Running provisioner: file...
00:01:17.525 default: ~/.gitconfig => .gitconfig
00:01:17.784
00:01:17.784 SUCCESS!
00:01:17.784
00:01:17.784 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:17.784 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:17.784 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:17.784
00:01:17.792 [Pipeline] }
00:01:17.808 [Pipeline] // stage
00:01:17.818 [Pipeline] dir
00:01:17.819 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:17.820 [Pipeline] {
00:01:17.834 [Pipeline] catchError
00:01:17.836 [Pipeline] {
00:01:17.851 [Pipeline] sh
00:01:18.300 + vagrant ssh-config --host vagrant
00:01:18.300 + sed -ne /^Host/,$p
00:01:18.300 + tee ssh_conf
00:01:22.479 Host vagrant
00:01:22.479 HostName 192.168.121.182
00:01:22.479 User vagrant
00:01:22.479 Port 22
00:01:22.479 UserKnownHostsFile /dev/null
00:01:22.479 StrictHostKeyChecking no
00:01:22.479 PasswordAuthentication no
00:01:22.479 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:22.479 IdentitiesOnly yes
00:01:22.479 LogLevel FATAL
00:01:22.479 ForwardAgent yes
00:01:22.479 ForwardX11 yes
00:01:22.479
00:01:22.493 [Pipeline] withEnv
00:01:22.495 [Pipeline] {
00:01:22.507 [Pipeline] sh
00:01:22.782 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:22.782 source /etc/os-release
00:01:22.782 [[ -e /image.version ]] && img=$(< /image.version)
00:01:22.782 # Minimal, systemd-like check.
00:01:22.782 if [[ -e /.dockerenv ]]; then
00:01:22.782 # Clear garbage from the node's name:
00:01:22.782 # agt-er_autotest_547-896 -> autotest_547-896
00:01:22.782 # $HOSTNAME is the actual container id
00:01:22.782 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:22.782 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:22.782 # We can assume this is a mount from a host where container is running,
00:01:22.782 # so fetch its hostname to easily identify the target swarm worker.
00:01:22.782 container="$(< /etc/hostname) ($agent)"
00:01:22.782 else
00:01:22.782 # Fallback
00:01:22.782 container=$agent
00:01:22.782 fi
00:01:22.782 fi
00:01:22.782 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:22.782
00:01:22.794 [Pipeline] }
00:01:22.838 [Pipeline] // withEnv
00:01:22.846 [Pipeline] setCustomBuildProperty
00:01:22.858 [Pipeline] stage
00:01:22.860 [Pipeline] { (Tests)
00:01:22.870 [Pipeline] sh
00:01:23.140 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:23.411 [Pipeline] sh
00:01:23.686 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:23.700 [Pipeline] timeout
00:01:23.700 Timeout set to expire in 1 hr 30 min
00:01:23.702 [Pipeline] {
00:01:23.716 [Pipeline] sh
00:01:23.996 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:24.562 HEAD is now at 971ec0126 bdevperf: Add hide_metadata option
00:01:24.573 [Pipeline] sh
00:01:24.850 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:25.121 [Pipeline] sh
00:01:25.398 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:25.671 [Pipeline] sh
00:01:25.955 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:26.212 ++ readlink -f spdk_repo
00:01:26.212 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:26.212 + [[ -n /home/vagrant/spdk_repo ]]
00:01:26.212 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:26.212 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:26.212 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:26.212 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:26.212 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:26.212 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:26.212 + cd /home/vagrant/spdk_repo
00:01:26.212 + source /etc/os-release
00:01:26.212 ++ NAME='Fedora Linux'
00:01:26.212 ++ VERSION='39 (Cloud Edition)'
00:01:26.212 ++ ID=fedora
00:01:26.212 ++ VERSION_ID=39
00:01:26.212 ++ VERSION_CODENAME=
00:01:26.212 ++ PLATFORM_ID=platform:f39
00:01:26.212 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:26.212 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:26.212 ++ LOGO=fedora-logo-icon
00:01:26.212 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:26.212 ++ HOME_URL=https://fedoraproject.org/
00:01:26.212 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:26.212 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:26.212 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:26.212 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:26.212 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:26.212 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:26.212 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:26.212 ++ SUPPORT_END=2024-11-12
00:01:26.212 ++ VARIANT='Cloud Edition'
00:01:26.212 ++ VARIANT_ID=cloud
00:01:26.212 + uname -a
00:01:26.212 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:26.212 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:26.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:26.494 Hugepages
00:01:26.494 node hugesize free / total
00:01:26.494 node0 1048576kB 0 / 0
00:01:26.494 node0 2048kB 0 / 0
00:01:26.494
00:01:26.494 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:26.494 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:26.752 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:26.752 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:26.752 + rm -f /tmp/spdk-ld-path
00:01:26.752 + source autorun-spdk.conf
00:01:26.752 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.752 ++ SPDK_RUN_ASAN=1
00:01:26.752 ++ SPDK_RUN_UBSAN=1
00:01:26.752 ++ SPDK_TEST_RAID=1
00:01:26.752 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:26.752 ++ RUN_NIGHTLY=0
00:01:26.752 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:26.752 + [[ -n '' ]]
00:01:26.752 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:26.752 + for M in /var/spdk/build-*-manifest.txt
00:01:26.752 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:26.752 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:26.752 + for M in /var/spdk/build-*-manifest.txt
00:01:26.752 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:26.752 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:26.752 + for M in /var/spdk/build-*-manifest.txt
00:01:26.752 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:26.752 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:26.752 ++ uname
00:01:26.752 + [[ Linux == \L\i\n\u\x ]]
00:01:26.752 + sudo dmesg -T
00:01:26.752 + sudo dmesg --clear
00:01:26.752 + dmesg_pid=5363
00:01:26.752 + sudo dmesg -Tw
00:01:26.752 + [[ Fedora Linux == FreeBSD ]]
00:01:26.752 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:26.752 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:26.752 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:26.752 + [[ -x /usr/src/fio-static/fio ]]
00:01:26.752 + export FIO_BIN=/usr/src/fio-static/fio
00:01:26.752 + FIO_BIN=/usr/src/fio-static/fio
00:01:26.752 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:26.752 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:26.752 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:26.752 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:26.752 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:26.752 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:26.752 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:26.752 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:26.752 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:26.752 18:50:53 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:26.752 18:50:53 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:26.752 18:50:53 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.752 18:50:53 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:26.752 18:50:53 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:26.752 18:50:53 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:26.752 18:50:53 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:26.752 18:50:53 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:26.752 18:50:53 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:26.752 18:50:53 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:27.011 18:50:53 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:27.011 18:50:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:27.011 18:50:53 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:27.011 18:50:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:27.011 18:50:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:27.011 18:50:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:27.011 18:50:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.011 18:50:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.011 18:50:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.011 18:50:53 -- paths/export.sh@5 -- $ export PATH
00:01:27.011 18:50:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:27.011 18:50:53 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:27.011 18:50:53 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:27.011 18:50:53 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732647053.XXXXXX
00:01:27.011 18:50:53 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732647053.pH80MW
00:01:27.011 18:50:53 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:27.011 18:50:53 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:27.011 18:50:53 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:27.011 18:50:53 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:27.011 18:50:53 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:27.011 18:50:53 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:27.011 18:50:53 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:27.011 18:50:53 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.011 18:50:53 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:27.011 18:50:53 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:27.011 18:50:53 -- pm/common@17 -- $ local monitor
00:01:27.011 18:50:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.011 18:50:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:27.011 18:50:53 -- pm/common@25 -- $ sleep 1
00:01:27.011 18:50:53 -- pm/common@21 -- $ date +%s
00:01:27.011 18:50:53 -- pm/common@21 -- $ date +%s
00:01:27.011 18:50:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732647053
00:01:27.012 18:50:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732647053
00:01:27.012 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732647053_collect-cpu-load.pm.log
00:01:27.012 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732647053_collect-vmstat.pm.log
00:01:27.945 18:50:54 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:27.945 18:50:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:27.945 18:50:54 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:27.945 18:50:54 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:27.945 18:50:54 -- spdk/autobuild.sh@16 -- $ date -u
00:01:27.945 Tue Nov 26 06:50:54 PM UTC 2024
00:01:27.945 18:50:54 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:27.945 v25.01-pre-260-g971ec0126
00:01:27.945 18:50:54 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:27.945 18:50:54 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:27.945 18:50:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:27.945 18:50:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:27.945 18:50:54 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.945 ************************************
00:01:27.945 START TEST asan
00:01:27.945 ************************************
00:01:27.945 using asan
00:01:27.945 18:50:54 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:27.945
00:01:27.945 real 0m0.000s
00:01:27.945 user 0m0.000s
00:01:27.945 sys 0m0.000s
00:01:27.945 18:50:54 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:27.945 ************************************
00:01:27.945 18:50:54 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:27.945 END TEST asan
00:01:27.945 ************************************
00:01:27.945 18:50:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:27.945 18:50:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:27.945 18:50:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:27.945 18:50:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:27.945 18:50:54 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.945 ************************************
00:01:27.945 START TEST ubsan
00:01:27.945 ************************************
00:01:27.945 using ubsan
00:01:27.945 18:50:54 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:27.945
00:01:27.945 real 0m0.000s
00:01:27.945 user 0m0.000s
00:01:27.945 sys 0m0.000s
00:01:27.945 18:50:54 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:27.945 ************************************
00:01:27.945 END TEST ubsan
00:01:27.945 ************************************
00:01:27.945 18:50:54 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:27.945 18:50:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:27.945 18:50:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:27.945 18:50:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:27.945 18:50:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:27.945 18:50:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:27.945 18:50:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:27.945 18:50:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:27.945 18:50:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:27.946 18:50:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:28.204 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:28.204 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:28.463 Using 'verbs' RDMA provider
00:01:44.260 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:56.522 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:56.522 Creating mk/config.mk...done.
00:01:56.522 Creating mk/cc.flags.mk...done.
00:01:56.522 Type 'make' to build.
00:01:56.522 18:51:22 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:56.522 18:51:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:56.522 18:51:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:56.522 18:51:22 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.522 ************************************
00:01:56.522 START TEST make
00:01:56.522 ************************************
00:01:56.522 18:51:22 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:56.522 make[1]: Nothing to be done for 'all'.
00:02:11.428 The Meson build system
00:02:11.428 Version: 1.5.0
00:02:11.428 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:11.428 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:11.428 Build type: native build
00:02:11.428 Program cat found: YES (/usr/bin/cat)
00:02:11.428 Project name: DPDK
00:02:11.428 Project version: 24.03.0
00:02:11.428 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:11.428 C linker for the host machine: cc ld.bfd 2.40-14
00:02:11.428 Host machine cpu family: x86_64
00:02:11.428 Host machine cpu: x86_64
00:02:11.428 Message: ## Building in Developer Mode ##
00:02:11.428 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:11.428 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:11.428 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:11.428 Program python3 found: YES (/usr/bin/python3)
00:02:11.428 Program cat found: YES (/usr/bin/cat)
00:02:11.428 Compiler for C supports arguments -march=native: YES
00:02:11.428 Checking for size of "void *" : 8
00:02:11.428 Checking for size of "void *" : 8 (cached)
00:02:11.428 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:11.428 Library m found: YES
00:02:11.428 Library numa found: YES
00:02:11.428 Has header "numaif.h" : YES
00:02:11.428 Library fdt found: NO
00:02:11.428 Library execinfo found: NO
00:02:11.428 Has header "execinfo.h" : YES
00:02:11.428 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:11.428 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:11.428 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:11.428 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:11.428 Run-time dependency openssl found: YES 3.1.1
00:02:11.428 Run-time dependency libpcap found: YES 1.10.4
00:02:11.428 Has header "pcap.h" with dependency
libpcap: YES 00:02:11.428 Compiler for C supports arguments -Wcast-qual: YES 00:02:11.428 Compiler for C supports arguments -Wdeprecated: YES 00:02:11.428 Compiler for C supports arguments -Wformat: YES 00:02:11.428 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:11.428 Compiler for C supports arguments -Wformat-security: NO 00:02:11.428 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:11.428 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:11.428 Compiler for C supports arguments -Wnested-externs: YES 00:02:11.428 Compiler for C supports arguments -Wold-style-definition: YES 00:02:11.428 Compiler for C supports arguments -Wpointer-arith: YES 00:02:11.428 Compiler for C supports arguments -Wsign-compare: YES 00:02:11.428 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:11.428 Compiler for C supports arguments -Wundef: YES 00:02:11.428 Compiler for C supports arguments -Wwrite-strings: YES 00:02:11.428 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:11.428 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:11.428 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:11.428 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:11.428 Program objdump found: YES (/usr/bin/objdump) 00:02:11.428 Compiler for C supports arguments -mavx512f: YES 00:02:11.428 Checking if "AVX512 checking" compiles: YES 00:02:11.428 Fetching value of define "__SSE4_2__" : 1 00:02:11.428 Fetching value of define "__AES__" : 1 00:02:11.428 Fetching value of define "__AVX__" : 1 00:02:11.428 Fetching value of define "__AVX2__" : 1 00:02:11.428 Fetching value of define "__AVX512BW__" : (undefined) 00:02:11.428 Fetching value of define "__AVX512CD__" : (undefined) 00:02:11.428 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:11.428 Fetching value of define "__AVX512F__" : (undefined) 00:02:11.428 Fetching value of define "__AVX512VL__" : 
(undefined) 00:02:11.428 Fetching value of define "__PCLMUL__" : 1 00:02:11.428 Fetching value of define "__RDRND__" : 1 00:02:11.428 Fetching value of define "__RDSEED__" : 1 00:02:11.428 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:11.428 Fetching value of define "__znver1__" : (undefined) 00:02:11.428 Fetching value of define "__znver2__" : (undefined) 00:02:11.428 Fetching value of define "__znver3__" : (undefined) 00:02:11.428 Fetching value of define "__znver4__" : (undefined) 00:02:11.428 Library asan found: YES 00:02:11.429 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:11.429 Message: lib/log: Defining dependency "log" 00:02:11.429 Message: lib/kvargs: Defining dependency "kvargs" 00:02:11.429 Message: lib/telemetry: Defining dependency "telemetry" 00:02:11.429 Library rt found: YES 00:02:11.429 Checking for function "getentropy" : NO 00:02:11.429 Message: lib/eal: Defining dependency "eal" 00:02:11.429 Message: lib/ring: Defining dependency "ring" 00:02:11.429 Message: lib/rcu: Defining dependency "rcu" 00:02:11.429 Message: lib/mempool: Defining dependency "mempool" 00:02:11.429 Message: lib/mbuf: Defining dependency "mbuf" 00:02:11.429 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:11.429 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:11.429 Compiler for C supports arguments -mpclmul: YES 00:02:11.429 Compiler for C supports arguments -maes: YES 00:02:11.429 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:11.429 Compiler for C supports arguments -mavx512bw: YES 00:02:11.429 Compiler for C supports arguments -mavx512dq: YES 00:02:11.429 Compiler for C supports arguments -mavx512vl: YES 00:02:11.429 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:11.429 Compiler for C supports arguments -mavx2: YES 00:02:11.429 Compiler for C supports arguments -mavx: YES 00:02:11.429 Message: lib/net: Defining dependency "net" 00:02:11.429 Message: lib/meter: Defining 
dependency "meter" 00:02:11.429 Message: lib/ethdev: Defining dependency "ethdev" 00:02:11.429 Message: lib/pci: Defining dependency "pci" 00:02:11.429 Message: lib/cmdline: Defining dependency "cmdline" 00:02:11.429 Message: lib/hash: Defining dependency "hash" 00:02:11.429 Message: lib/timer: Defining dependency "timer" 00:02:11.429 Message: lib/compressdev: Defining dependency "compressdev" 00:02:11.429 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:11.429 Message: lib/dmadev: Defining dependency "dmadev" 00:02:11.429 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:11.429 Message: lib/power: Defining dependency "power" 00:02:11.429 Message: lib/reorder: Defining dependency "reorder" 00:02:11.429 Message: lib/security: Defining dependency "security" 00:02:11.429 Has header "linux/userfaultfd.h" : YES 00:02:11.429 Has header "linux/vduse.h" : YES 00:02:11.429 Message: lib/vhost: Defining dependency "vhost" 00:02:11.429 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:11.429 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:11.429 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:11.429 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:11.429 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:11.429 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:11.429 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:11.429 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:11.429 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:11.429 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:11.429 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:11.429 Configuring doxy-api-html.conf using configuration 00:02:11.429 Configuring doxy-api-man.conf using configuration 00:02:11.429 Program mandb found: YES 
(/usr/bin/mandb) 00:02:11.429 Program sphinx-build found: NO 00:02:11.429 Configuring rte_build_config.h using configuration 00:02:11.429 Message: 00:02:11.429 ================= 00:02:11.429 Applications Enabled 00:02:11.429 ================= 00:02:11.429 00:02:11.429 apps: 00:02:11.429 00:02:11.429 00:02:11.429 Message: 00:02:11.429 ================= 00:02:11.429 Libraries Enabled 00:02:11.429 ================= 00:02:11.429 00:02:11.429 libs: 00:02:11.429 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:11.429 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:11.429 cryptodev, dmadev, power, reorder, security, vhost, 00:02:11.429 00:02:11.429 Message: 00:02:11.429 =============== 00:02:11.429 Drivers Enabled 00:02:11.429 =============== 00:02:11.429 00:02:11.429 common: 00:02:11.429 00:02:11.429 bus: 00:02:11.429 pci, vdev, 00:02:11.429 mempool: 00:02:11.429 ring, 00:02:11.429 dma: 00:02:11.429 00:02:11.429 net: 00:02:11.429 00:02:11.429 crypto: 00:02:11.429 00:02:11.429 compress: 00:02:11.429 00:02:11.429 vdpa: 00:02:11.429 00:02:11.429 00:02:11.429 Message: 00:02:11.429 ================= 00:02:11.429 Content Skipped 00:02:11.429 ================= 00:02:11.429 00:02:11.429 apps: 00:02:11.429 dumpcap: explicitly disabled via build config 00:02:11.429 graph: explicitly disabled via build config 00:02:11.429 pdump: explicitly disabled via build config 00:02:11.429 proc-info: explicitly disabled via build config 00:02:11.429 test-acl: explicitly disabled via build config 00:02:11.429 test-bbdev: explicitly disabled via build config 00:02:11.429 test-cmdline: explicitly disabled via build config 00:02:11.429 test-compress-perf: explicitly disabled via build config 00:02:11.429 test-crypto-perf: explicitly disabled via build config 00:02:11.429 test-dma-perf: explicitly disabled via build config 00:02:11.429 test-eventdev: explicitly disabled via build config 00:02:11.429 test-fib: explicitly disabled via build config 00:02:11.429 
test-flow-perf: explicitly disabled via build config 00:02:11.429 test-gpudev: explicitly disabled via build config 00:02:11.429 test-mldev: explicitly disabled via build config 00:02:11.429 test-pipeline: explicitly disabled via build config 00:02:11.429 test-pmd: explicitly disabled via build config 00:02:11.429 test-regex: explicitly disabled via build config 00:02:11.429 test-sad: explicitly disabled via build config 00:02:11.429 test-security-perf: explicitly disabled via build config 00:02:11.429 00:02:11.429 libs: 00:02:11.429 argparse: explicitly disabled via build config 00:02:11.429 metrics: explicitly disabled via build config 00:02:11.429 acl: explicitly disabled via build config 00:02:11.429 bbdev: explicitly disabled via build config 00:02:11.429 bitratestats: explicitly disabled via build config 00:02:11.429 bpf: explicitly disabled via build config 00:02:11.429 cfgfile: explicitly disabled via build config 00:02:11.429 distributor: explicitly disabled via build config 00:02:11.429 efd: explicitly disabled via build config 00:02:11.429 eventdev: explicitly disabled via build config 00:02:11.429 dispatcher: explicitly disabled via build config 00:02:11.429 gpudev: explicitly disabled via build config 00:02:11.429 gro: explicitly disabled via build config 00:02:11.429 gso: explicitly disabled via build config 00:02:11.429 ip_frag: explicitly disabled via build config 00:02:11.429 jobstats: explicitly disabled via build config 00:02:11.429 latencystats: explicitly disabled via build config 00:02:11.429 lpm: explicitly disabled via build config 00:02:11.429 member: explicitly disabled via build config 00:02:11.429 pcapng: explicitly disabled via build config 00:02:11.429 rawdev: explicitly disabled via build config 00:02:11.429 regexdev: explicitly disabled via build config 00:02:11.429 mldev: explicitly disabled via build config 00:02:11.429 rib: explicitly disabled via build config 00:02:11.430 sched: explicitly disabled via build config 00:02:11.430 
stack: explicitly disabled via build config 00:02:11.430 ipsec: explicitly disabled via build config 00:02:11.430 pdcp: explicitly disabled via build config 00:02:11.430 fib: explicitly disabled via build config 00:02:11.430 port: explicitly disabled via build config 00:02:11.430 pdump: explicitly disabled via build config 00:02:11.430 table: explicitly disabled via build config 00:02:11.430 pipeline: explicitly disabled via build config 00:02:11.430 graph: explicitly disabled via build config 00:02:11.430 node: explicitly disabled via build config 00:02:11.430 00:02:11.430 drivers: 00:02:11.430 common/cpt: not in enabled drivers build config 00:02:11.430 common/dpaax: not in enabled drivers build config 00:02:11.430 common/iavf: not in enabled drivers build config 00:02:11.430 common/idpf: not in enabled drivers build config 00:02:11.430 common/ionic: not in enabled drivers build config 00:02:11.430 common/mvep: not in enabled drivers build config 00:02:11.430 common/octeontx: not in enabled drivers build config 00:02:11.430 bus/auxiliary: not in enabled drivers build config 00:02:11.430 bus/cdx: not in enabled drivers build config 00:02:11.430 bus/dpaa: not in enabled drivers build config 00:02:11.430 bus/fslmc: not in enabled drivers build config 00:02:11.430 bus/ifpga: not in enabled drivers build config 00:02:11.430 bus/platform: not in enabled drivers build config 00:02:11.430 bus/uacce: not in enabled drivers build config 00:02:11.430 bus/vmbus: not in enabled drivers build config 00:02:11.430 common/cnxk: not in enabled drivers build config 00:02:11.430 common/mlx5: not in enabled drivers build config 00:02:11.430 common/nfp: not in enabled drivers build config 00:02:11.430 common/nitrox: not in enabled drivers build config 00:02:11.430 common/qat: not in enabled drivers build config 00:02:11.430 common/sfc_efx: not in enabled drivers build config 00:02:11.430 mempool/bucket: not in enabled drivers build config 00:02:11.430 mempool/cnxk: not in enabled 
drivers build config 00:02:11.430 mempool/dpaa: not in enabled drivers build config 00:02:11.430 mempool/dpaa2: not in enabled drivers build config 00:02:11.430 mempool/octeontx: not in enabled drivers build config 00:02:11.430 mempool/stack: not in enabled drivers build config 00:02:11.430 dma/cnxk: not in enabled drivers build config 00:02:11.430 dma/dpaa: not in enabled drivers build config 00:02:11.430 dma/dpaa2: not in enabled drivers build config 00:02:11.430 dma/hisilicon: not in enabled drivers build config 00:02:11.430 dma/idxd: not in enabled drivers build config 00:02:11.430 dma/ioat: not in enabled drivers build config 00:02:11.430 dma/skeleton: not in enabled drivers build config 00:02:11.430 net/af_packet: not in enabled drivers build config 00:02:11.430 net/af_xdp: not in enabled drivers build config 00:02:11.430 net/ark: not in enabled drivers build config 00:02:11.430 net/atlantic: not in enabled drivers build config 00:02:11.430 net/avp: not in enabled drivers build config 00:02:11.430 net/axgbe: not in enabled drivers build config 00:02:11.430 net/bnx2x: not in enabled drivers build config 00:02:11.430 net/bnxt: not in enabled drivers build config 00:02:11.430 net/bonding: not in enabled drivers build config 00:02:11.430 net/cnxk: not in enabled drivers build config 00:02:11.430 net/cpfl: not in enabled drivers build config 00:02:11.430 net/cxgbe: not in enabled drivers build config 00:02:11.430 net/dpaa: not in enabled drivers build config 00:02:11.430 net/dpaa2: not in enabled drivers build config 00:02:11.430 net/e1000: not in enabled drivers build config 00:02:11.430 net/ena: not in enabled drivers build config 00:02:11.430 net/enetc: not in enabled drivers build config 00:02:11.430 net/enetfec: not in enabled drivers build config 00:02:11.430 net/enic: not in enabled drivers build config 00:02:11.430 net/failsafe: not in enabled drivers build config 00:02:11.430 net/fm10k: not in enabled drivers build config 00:02:11.430 net/gve: not in 
enabled drivers build config 00:02:11.430 net/hinic: not in enabled drivers build config 00:02:11.430 net/hns3: not in enabled drivers build config 00:02:11.430 net/i40e: not in enabled drivers build config 00:02:11.430 net/iavf: not in enabled drivers build config 00:02:11.430 net/ice: not in enabled drivers build config 00:02:11.430 net/idpf: not in enabled drivers build config 00:02:11.430 net/igc: not in enabled drivers build config 00:02:11.430 net/ionic: not in enabled drivers build config 00:02:11.430 net/ipn3ke: not in enabled drivers build config 00:02:11.430 net/ixgbe: not in enabled drivers build config 00:02:11.430 net/mana: not in enabled drivers build config 00:02:11.430 net/memif: not in enabled drivers build config 00:02:11.430 net/mlx4: not in enabled drivers build config 00:02:11.430 net/mlx5: not in enabled drivers build config 00:02:11.430 net/mvneta: not in enabled drivers build config 00:02:11.430 net/mvpp2: not in enabled drivers build config 00:02:11.430 net/netvsc: not in enabled drivers build config 00:02:11.430 net/nfb: not in enabled drivers build config 00:02:11.430 net/nfp: not in enabled drivers build config 00:02:11.430 net/ngbe: not in enabled drivers build config 00:02:11.430 net/null: not in enabled drivers build config 00:02:11.430 net/octeontx: not in enabled drivers build config 00:02:11.430 net/octeon_ep: not in enabled drivers build config 00:02:11.430 net/pcap: not in enabled drivers build config 00:02:11.430 net/pfe: not in enabled drivers build config 00:02:11.430 net/qede: not in enabled drivers build config 00:02:11.430 net/ring: not in enabled drivers build config 00:02:11.430 net/sfc: not in enabled drivers build config 00:02:11.430 net/softnic: not in enabled drivers build config 00:02:11.430 net/tap: not in enabled drivers build config 00:02:11.430 net/thunderx: not in enabled drivers build config 00:02:11.430 net/txgbe: not in enabled drivers build config 00:02:11.430 net/vdev_netvsc: not in enabled drivers build 
config 00:02:11.430 net/vhost: not in enabled drivers build config 00:02:11.430 net/virtio: not in enabled drivers build config 00:02:11.430 net/vmxnet3: not in enabled drivers build config 00:02:11.430 raw/*: missing internal dependency, "rawdev" 00:02:11.430 crypto/armv8: not in enabled drivers build config 00:02:11.430 crypto/bcmfs: not in enabled drivers build config 00:02:11.430 crypto/caam_jr: not in enabled drivers build config 00:02:11.430 crypto/ccp: not in enabled drivers build config 00:02:11.430 crypto/cnxk: not in enabled drivers build config 00:02:11.430 crypto/dpaa_sec: not in enabled drivers build config 00:02:11.430 crypto/dpaa2_sec: not in enabled drivers build config 00:02:11.430 crypto/ipsec_mb: not in enabled drivers build config 00:02:11.430 crypto/mlx5: not in enabled drivers build config 00:02:11.430 crypto/mvsam: not in enabled drivers build config 00:02:11.430 crypto/nitrox: not in enabled drivers build config 00:02:11.430 crypto/null: not in enabled drivers build config 00:02:11.430 crypto/octeontx: not in enabled drivers build config 00:02:11.430 crypto/openssl: not in enabled drivers build config 00:02:11.430 crypto/scheduler: not in enabled drivers build config 00:02:11.430 crypto/uadk: not in enabled drivers build config 00:02:11.430 crypto/virtio: not in enabled drivers build config 00:02:11.430 compress/isal: not in enabled drivers build config 00:02:11.430 compress/mlx5: not in enabled drivers build config 00:02:11.430 compress/nitrox: not in enabled drivers build config 00:02:11.430 compress/octeontx: not in enabled drivers build config 00:02:11.430 compress/zlib: not in enabled drivers build config 00:02:11.430 regex/*: missing internal dependency, "regexdev" 00:02:11.430 ml/*: missing internal dependency, "mldev" 00:02:11.430 vdpa/ifc: not in enabled drivers build config 00:02:11.430 vdpa/mlx5: not in enabled drivers build config 00:02:11.430 vdpa/nfp: not in enabled drivers build config 00:02:11.430 vdpa/sfc: not in enabled 
drivers build config 00:02:11.430 event/*: missing internal dependency, "eventdev" 00:02:11.430 baseband/*: missing internal dependency, "bbdev" 00:02:11.430 gpu/*: missing internal dependency, "gpudev" 00:02:11.430 00:02:11.430 00:02:11.431 Build targets in project: 85 00:02:11.431 00:02:11.431 DPDK 24.03.0 00:02:11.431 00:02:11.431 User defined options 00:02:11.431 buildtype : debug 00:02:11.431 default_library : shared 00:02:11.431 libdir : lib 00:02:11.431 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:11.431 b_sanitize : address 00:02:11.431 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:11.431 c_link_args : 00:02:11.431 cpu_instruction_set: native 00:02:11.431 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:11.431 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:11.431 enable_docs : false 00:02:11.431 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:11.431 enable_kmods : false 00:02:11.431 max_lcores : 128 00:02:11.431 tests : false 00:02:11.431 00:02:11.431 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:11.431 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:11.431 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:11.431 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:11.431 [3/268] Linking static target lib/librte_kvargs.a 00:02:11.431 [4/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:11.431 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:11.431 [6/268] Linking static target lib/librte_log.a 00:02:11.431 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.431 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:11.431 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:11.431 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:11.689 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:11.689 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:11.689 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:11.689 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:11.689 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:11.689 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:11.689 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:11.689 [18/268] Linking static target lib/librte_telemetry.a 00:02:11.949 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.949 [20/268] Linking target lib/librte_log.so.24.1 00:02:12.208 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:12.465 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:12.465 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:12.465 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:12.465 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:12.724 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 
00:02:12.724 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:12.724 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:12.724 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.724 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:12.724 [31/268] Linking target lib/librte_telemetry.so.24.1 00:02:12.724 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:12.980 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:12.980 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:12.980 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:12.980 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:13.237 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:13.494 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:13.751 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:13.751 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:13.751 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:13.751 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:13.751 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:13.751 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.009 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:14.009 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.009 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.268 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:14.268 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:14.268 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:14.526 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:14.785 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:14.785 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.042 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.042 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.042 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.042 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.042 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.300 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.300 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:15.300 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.558 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.558 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:15.817 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.817 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.817 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.817 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.075 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.075 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.332 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.332 [71/268] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.332 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.332 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.332 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.332 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.589 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.589 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.589 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.589 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.848 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:17.105 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:17.105 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:17.363 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:17.363 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.363 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.622 [86/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.622 [87/268] Linking static target lib/librte_rcu.a 00:02:17.622 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.622 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:17.622 [90/268] Linking static target lib/librte_ring.a 00:02:17.622 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.622 [92/268] Linking static target lib/librte_eal.a 00:02:17.880 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.880 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.880 
[95/268] Linking static target lib/librte_mempool.a 00:02:17.880 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.880 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.880 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:17.880 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:18.138 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.138 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.397 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.654 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:18.654 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.654 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.654 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.654 [107/268] Linking static target lib/librte_net.a 00:02:18.912 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.912 [109/268] Linking static target lib/librte_mbuf.a 00:02:18.912 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.912 [111/268] Linking static target lib/librte_meter.a 00:02:19.170 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.170 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:19.170 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:19.428 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.428 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:19.428 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:19.428 [118/268] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:19.996 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:20.254 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.254 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.254 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:20.513 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:20.770 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.770 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.770 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:20.770 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:20.770 [128/268] Linking static target lib/librte_pci.a 00:02:20.770 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.028 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.028 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:21.287 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:21.287 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.287 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:21.287 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:21.287 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:21.287 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:21.287 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:21.287 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:21.546 [140/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:21.546 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:21.546 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:21.546 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:21.546 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:21.804 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:21.804 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:21.804 [147/268] Linking static target lib/librte_cmdline.a 00:02:22.372 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:22.372 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:22.372 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.372 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:22.713 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:22.713 [153/268] Linking static target lib/librte_timer.a 00:02:22.713 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:22.972 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:22.972 [156/268] Linking static target lib/librte_ethdev.a 00:02:22.972 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.972 [158/268] Linking static target lib/librte_hash.a 00:02:23.231 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:23.231 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.231 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:23.231 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.231 [163/268] Compiling C 
object lib/librte_power.a.p/power_guest_channel.c.o 00:02:23.489 [164/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:23.489 [165/268] Linking static target lib/librte_compressdev.a 00:02:23.747 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.747 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:23.747 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.006 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.006 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.006 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.264 [172/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.264 [173/268] Linking static target lib/librte_dmadev.a 00:02:24.264 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.524 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.524 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.524 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.782 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:24.782 [179/268] Linking static target lib/librte_cryptodev.a 00:02:24.782 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.042 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:25.042 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:25.042 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.042 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.299 [185/268] 
Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.558 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.558 [187/268] Linking static target lib/librte_reorder.a 00:02:25.558 [188/268] Linking static target lib/librte_power.a 00:02:25.817 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.817 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.817 [191/268] Linking static target lib/librte_security.a 00:02:26.076 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:26.076 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.076 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:26.643 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.643 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.901 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.901 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:26.902 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.467 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:27.467 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:27.467 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.467 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:27.724 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:27.725 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:27.982 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:27.982 [207/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:28.241 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:28.241 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:28.241 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:28.241 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:28.499 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:28.499 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:28.757 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.757 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.757 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:28.757 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:28.757 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.757 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.757 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:28.757 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:29.016 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:29.016 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.016 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.016 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:29.016 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.273 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:30.208 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:30.465 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.723 [230/268] Linking target lib/librte_eal.so.24.1 00:02:30.980 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:30.980 [232/268] Linking target lib/librte_dmadev.so.24.1 00:02:30.980 [233/268] Linking target lib/librte_pci.so.24.1 00:02:30.980 [234/268] Linking target lib/librte_meter.so.24.1 00:02:30.980 [235/268] Linking target lib/librte_timer.so.24.1 00:02:30.980 [236/268] Linking target lib/librte_ring.so.24.1 00:02:30.980 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:30.980 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:31.237 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:31.237 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:31.237 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:31.237 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:31.237 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:31.237 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:31.237 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:31.237 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:31.495 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:31.495 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:31.495 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:31.800 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:31.800 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:02:31.800 [252/268] Linking target 
lib/librte_compressdev.so.24.1 00:02:31.800 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:31.800 [254/268] Linking target lib/librte_net.so.24.1 00:02:31.800 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:31.800 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:32.072 [257/268] Linking target lib/librte_security.so.24.1 00:02:32.072 [258/268] Linking target lib/librte_hash.so.24.1 00:02:32.072 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:32.072 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.072 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:32.072 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:32.331 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:32.331 [264/268] Linking target lib/librte_power.so.24.1 00:02:35.611 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:35.611 [266/268] Linking static target lib/librte_vhost.a 00:02:36.986 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.986 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:36.986 INFO: autodetecting backend as ninja 00:02:36.986 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:03.515 CC lib/ut_mock/mock.o 00:03:03.515 CC lib/log/log.o 00:03:03.515 CC lib/log/log_flags.o 00:03:03.515 CC lib/log/log_deprecated.o 00:03:03.515 CC lib/ut/ut.o 00:03:03.515 LIB libspdk_ut_mock.a 00:03:03.515 LIB libspdk_log.a 00:03:03.515 SO libspdk_ut_mock.so.6.0 00:03:03.515 LIB libspdk_ut.a 00:03:03.515 SO libspdk_log.so.7.1 00:03:03.515 SO libspdk_ut.so.2.0 00:03:03.515 SYMLINK libspdk_ut_mock.so 00:03:03.515 SYMLINK libspdk_log.so 00:03:03.515 SYMLINK libspdk_ut.so 00:03:03.515 
CXX lib/trace_parser/trace.o 00:03:03.515 CC lib/ioat/ioat.o 00:03:03.515 CC lib/dma/dma.o 00:03:03.515 CC lib/util/base64.o 00:03:03.515 CC lib/util/cpuset.o 00:03:03.515 CC lib/util/bit_array.o 00:03:03.515 CC lib/util/crc32.o 00:03:03.515 CC lib/util/crc16.o 00:03:03.515 CC lib/util/crc32c.o 00:03:03.515 CC lib/vfio_user/host/vfio_user_pci.o 00:03:03.515 CC lib/vfio_user/host/vfio_user.o 00:03:03.515 CC lib/util/crc32_ieee.o 00:03:03.515 CC lib/util/crc64.o 00:03:03.515 LIB libspdk_dma.a 00:03:03.515 SO libspdk_dma.so.5.0 00:03:03.515 CC lib/util/dif.o 00:03:03.515 CC lib/util/fd.o 00:03:03.515 SYMLINK libspdk_dma.so 00:03:03.515 CC lib/util/file.o 00:03:03.515 CC lib/util/fd_group.o 00:03:03.515 CC lib/util/hexlify.o 00:03:03.515 CC lib/util/iov.o 00:03:03.515 CC lib/util/math.o 00:03:03.515 LIB libspdk_vfio_user.a 00:03:03.515 SO libspdk_vfio_user.so.5.0 00:03:03.515 CC lib/util/net.o 00:03:03.515 SYMLINK libspdk_vfio_user.so 00:03:03.515 LIB libspdk_ioat.a 00:03:03.515 CC lib/util/pipe.o 00:03:03.515 CC lib/util/strerror_tls.o 00:03:03.515 SO libspdk_ioat.so.7.0 00:03:03.515 CC lib/util/string.o 00:03:03.515 CC lib/util/uuid.o 00:03:03.515 SYMLINK libspdk_ioat.so 00:03:03.515 CC lib/util/xor.o 00:03:03.515 CC lib/util/zipf.o 00:03:03.515 CC lib/util/md5.o 00:03:03.515 LIB libspdk_util.a 00:03:03.515 LIB libspdk_trace_parser.a 00:03:03.515 SO libspdk_trace_parser.so.6.0 00:03:03.515 SO libspdk_util.so.10.1 00:03:03.515 SYMLINK libspdk_trace_parser.so 00:03:03.515 SYMLINK libspdk_util.so 00:03:03.515 CC lib/json/json_parse.o 00:03:03.515 CC lib/json/json_util.o 00:03:03.515 CC lib/json/json_write.o 00:03:03.515 CC lib/rdma_utils/rdma_utils.o 00:03:03.515 CC lib/idxd/idxd.o 00:03:03.515 CC lib/idxd/idxd_user.o 00:03:03.515 CC lib/idxd/idxd_kernel.o 00:03:03.515 CC lib/env_dpdk/env.o 00:03:03.515 CC lib/conf/conf.o 00:03:03.515 CC lib/vmd/vmd.o 00:03:03.515 CC lib/vmd/led.o 00:03:03.515 CC lib/env_dpdk/memory.o 00:03:03.515 CC lib/env_dpdk/pci.o 00:03:03.515 LIB 
libspdk_conf.a 00:03:03.515 CC lib/env_dpdk/init.o 00:03:03.515 CC lib/env_dpdk/threads.o 00:03:03.515 SO libspdk_conf.so.6.0 00:03:03.515 LIB libspdk_json.a 00:03:03.515 SO libspdk_json.so.6.0 00:03:03.515 SYMLINK libspdk_conf.so 00:03:03.515 CC lib/env_dpdk/pci_ioat.o 00:03:03.515 SYMLINK libspdk_json.so 00:03:03.515 CC lib/env_dpdk/pci_virtio.o 00:03:03.515 LIB libspdk_rdma_utils.a 00:03:03.773 SO libspdk_rdma_utils.so.1.0 00:03:03.773 SYMLINK libspdk_rdma_utils.so 00:03:03.773 CC lib/env_dpdk/pci_vmd.o 00:03:03.773 CC lib/env_dpdk/pci_idxd.o 00:03:03.773 CC lib/env_dpdk/pci_event.o 00:03:03.773 CC lib/jsonrpc/jsonrpc_server.o 00:03:03.773 CC lib/env_dpdk/sigbus_handler.o 00:03:04.031 CC lib/env_dpdk/pci_dpdk.o 00:03:04.031 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:04.031 LIB libspdk_idxd.a 00:03:04.031 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:04.031 SO libspdk_idxd.so.12.1 00:03:04.031 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:04.031 CC lib/jsonrpc/jsonrpc_client.o 00:03:04.031 SYMLINK libspdk_idxd.so 00:03:04.031 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:04.289 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:04.289 CC lib/rdma_provider/common.o 00:03:04.289 LIB libspdk_vmd.a 00:03:04.289 SO libspdk_vmd.so.6.0 00:03:04.289 SYMLINK libspdk_vmd.so 00:03:04.289 LIB libspdk_jsonrpc.a 00:03:04.547 LIB libspdk_rdma_provider.a 00:03:04.547 SO libspdk_jsonrpc.so.6.0 00:03:04.547 SO libspdk_rdma_provider.so.7.0 00:03:04.547 SYMLINK libspdk_jsonrpc.so 00:03:04.547 SYMLINK libspdk_rdma_provider.so 00:03:04.805 CC lib/rpc/rpc.o 00:03:05.063 LIB libspdk_rpc.a 00:03:05.063 SO libspdk_rpc.so.6.0 00:03:05.063 SYMLINK libspdk_rpc.so 00:03:05.063 LIB libspdk_env_dpdk.a 00:03:05.321 SO libspdk_env_dpdk.so.15.1 00:03:05.321 CC lib/trace/trace.o 00:03:05.321 CC lib/trace/trace_flags.o 00:03:05.321 CC lib/trace/trace_rpc.o 00:03:05.321 CC lib/keyring/keyring.o 00:03:05.321 CC lib/keyring/keyring_rpc.o 00:03:05.321 CC lib/notify/notify.o 00:03:05.321 CC lib/notify/notify_rpc.o 00:03:05.321 
SYMLINK libspdk_env_dpdk.so 00:03:05.579 LIB libspdk_notify.a 00:03:05.579 SO libspdk_notify.so.6.0 00:03:05.579 LIB libspdk_keyring.a 00:03:05.579 LIB libspdk_trace.a 00:03:05.579 SYMLINK libspdk_notify.so 00:03:05.579 SO libspdk_keyring.so.2.0 00:03:05.838 SO libspdk_trace.so.11.0 00:03:05.838 SYMLINK libspdk_keyring.so 00:03:05.838 SYMLINK libspdk_trace.so 00:03:06.095 CC lib/thread/thread.o 00:03:06.095 CC lib/thread/iobuf.o 00:03:06.095 CC lib/sock/sock_rpc.o 00:03:06.095 CC lib/sock/sock.o 00:03:06.672 LIB libspdk_sock.a 00:03:06.672 SO libspdk_sock.so.10.0 00:03:06.930 SYMLINK libspdk_sock.so 00:03:07.187 CC lib/nvme/nvme_ctrlr.o 00:03:07.187 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:07.187 CC lib/nvme/nvme_fabric.o 00:03:07.187 CC lib/nvme/nvme_ns_cmd.o 00:03:07.187 CC lib/nvme/nvme_pcie_common.o 00:03:07.187 CC lib/nvme/nvme_qpair.o 00:03:07.187 CC lib/nvme/nvme_pcie.o 00:03:07.187 CC lib/nvme/nvme_ns.o 00:03:07.187 CC lib/nvme/nvme.o 00:03:08.120 CC lib/nvme/nvme_quirks.o 00:03:08.120 CC lib/nvme/nvme_transport.o 00:03:08.120 CC lib/nvme/nvme_discovery.o 00:03:08.120 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:08.377 LIB libspdk_thread.a 00:03:08.377 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:08.377 SO libspdk_thread.so.11.0 00:03:08.377 CC lib/nvme/nvme_tcp.o 00:03:08.377 CC lib/nvme/nvme_opal.o 00:03:08.377 SYMLINK libspdk_thread.so 00:03:08.377 CC lib/nvme/nvme_io_msg.o 00:03:08.634 CC lib/nvme/nvme_poll_group.o 00:03:08.634 CC lib/nvme/nvme_zns.o 00:03:08.892 CC lib/nvme/nvme_stubs.o 00:03:08.892 CC lib/nvme/nvme_auth.o 00:03:09.150 CC lib/nvme/nvme_cuse.o 00:03:09.150 CC lib/nvme/nvme_rdma.o 00:03:09.150 CC lib/accel/accel.o 00:03:09.407 CC lib/blob/blobstore.o 00:03:09.407 CC lib/accel/accel_rpc.o 00:03:09.407 CC lib/accel/accel_sw.o 00:03:09.407 CC lib/blob/request.o 00:03:09.664 CC lib/blob/zeroes.o 00:03:09.923 CC lib/blob/blob_bs_dev.o 00:03:09.923 CC lib/init/json_config.o 00:03:10.182 CC lib/init/subsystem.o 00:03:10.182 CC lib/virtio/virtio.o 00:03:10.182 
CC lib/init/subsystem_rpc.o 00:03:10.182 CC lib/init/rpc.o 00:03:10.182 CC lib/virtio/virtio_vhost_user.o 00:03:10.440 CC lib/virtio/virtio_vfio_user.o 00:03:10.440 CC lib/virtio/virtio_pci.o 00:03:10.440 LIB libspdk_init.a 00:03:10.440 SO libspdk_init.so.6.0 00:03:10.440 CC lib/fsdev/fsdev.o 00:03:10.440 CC lib/fsdev/fsdev_io.o 00:03:10.440 SYMLINK libspdk_init.so 00:03:10.440 CC lib/fsdev/fsdev_rpc.o 00:03:10.698 CC lib/event/app.o 00:03:10.698 CC lib/event/reactor.o 00:03:10.698 CC lib/event/log_rpc.o 00:03:10.698 CC lib/event/app_rpc.o 00:03:10.698 LIB libspdk_accel.a 00:03:10.698 LIB libspdk_virtio.a 00:03:10.698 SO libspdk_accel.so.16.0 00:03:10.699 SO libspdk_virtio.so.7.0 00:03:10.958 LIB libspdk_nvme.a 00:03:10.958 CC lib/event/scheduler_static.o 00:03:10.958 SYMLINK libspdk_accel.so 00:03:10.958 SYMLINK libspdk_virtio.so 00:03:10.958 SO libspdk_nvme.so.15.0 00:03:11.216 CC lib/bdev/bdev.o 00:03:11.216 CC lib/bdev/bdev_zone.o 00:03:11.217 CC lib/bdev/bdev_rpc.o 00:03:11.217 CC lib/bdev/scsi_nvme.o 00:03:11.217 CC lib/bdev/part.o 00:03:11.217 LIB libspdk_fsdev.a 00:03:11.217 SO libspdk_fsdev.so.2.0 00:03:11.217 LIB libspdk_event.a 00:03:11.475 SO libspdk_event.so.14.0 00:03:11.475 SYMLINK libspdk_nvme.so 00:03:11.475 SYMLINK libspdk_fsdev.so 00:03:11.475 SYMLINK libspdk_event.so 00:03:11.732 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:12.680 LIB libspdk_fuse_dispatcher.a 00:03:12.680 SO libspdk_fuse_dispatcher.so.1.0 00:03:12.680 SYMLINK libspdk_fuse_dispatcher.so 00:03:14.060 LIB libspdk_blob.a 00:03:14.318 SO libspdk_blob.so.12.0 00:03:14.318 SYMLINK libspdk_blob.so 00:03:14.576 CC lib/lvol/lvol.o 00:03:14.576 CC lib/blobfs/blobfs.o 00:03:14.576 CC lib/blobfs/tree.o 00:03:14.835 LIB libspdk_bdev.a 00:03:15.094 SO libspdk_bdev.so.17.0 00:03:15.094 SYMLINK libspdk_bdev.so 00:03:15.352 CC lib/ublk/ublk.o 00:03:15.352 CC lib/ublk/ublk_rpc.o 00:03:15.352 CC lib/ftl/ftl_core.o 00:03:15.352 CC lib/ftl/ftl_init.o 00:03:15.352 CC lib/ftl/ftl_layout.o 
00:03:15.352 CC lib/nvmf/ctrlr.o 00:03:15.352 CC lib/nbd/nbd.o 00:03:15.352 CC lib/scsi/dev.o 00:03:15.611 CC lib/scsi/lun.o 00:03:15.611 CC lib/ftl/ftl_debug.o 00:03:15.611 CC lib/ftl/ftl_io.o 00:03:15.869 CC lib/scsi/port.o 00:03:15.869 LIB libspdk_blobfs.a 00:03:15.869 CC lib/ftl/ftl_sb.o 00:03:15.869 CC lib/nvmf/ctrlr_discovery.o 00:03:15.869 SO libspdk_blobfs.so.11.0 00:03:15.869 CC lib/nbd/nbd_rpc.o 00:03:15.869 CC lib/nvmf/ctrlr_bdev.o 00:03:15.869 CC lib/scsi/scsi.o 00:03:16.128 SYMLINK libspdk_blobfs.so 00:03:16.128 CC lib/scsi/scsi_bdev.o 00:03:16.128 LIB libspdk_lvol.a 00:03:16.128 CC lib/scsi/scsi_pr.o 00:03:16.128 SO libspdk_lvol.so.11.0 00:03:16.128 CC lib/ftl/ftl_l2p.o 00:03:16.128 SYMLINK libspdk_lvol.so 00:03:16.128 CC lib/scsi/scsi_rpc.o 00:03:16.128 CC lib/nvmf/subsystem.o 00:03:16.128 LIB libspdk_nbd.a 00:03:16.128 SO libspdk_nbd.so.7.0 00:03:16.386 SYMLINK libspdk_nbd.so 00:03:16.386 CC lib/nvmf/nvmf.o 00:03:16.386 CC lib/scsi/task.o 00:03:16.386 LIB libspdk_ublk.a 00:03:16.386 CC lib/ftl/ftl_l2p_flat.o 00:03:16.386 SO libspdk_ublk.so.3.0 00:03:16.386 CC lib/nvmf/nvmf_rpc.o 00:03:16.386 SYMLINK libspdk_ublk.so 00:03:16.386 CC lib/nvmf/transport.o 00:03:16.644 CC lib/nvmf/tcp.o 00:03:16.645 CC lib/nvmf/stubs.o 00:03:16.645 CC lib/ftl/ftl_nv_cache.o 00:03:16.645 LIB libspdk_scsi.a 00:03:16.903 SO libspdk_scsi.so.9.0 00:03:16.903 SYMLINK libspdk_scsi.so 00:03:16.903 CC lib/ftl/ftl_band.o 00:03:16.903 CC lib/ftl/ftl_band_ops.o 00:03:17.162 CC lib/nvmf/mdns_server.o 00:03:17.421 CC lib/ftl/ftl_writer.o 00:03:17.421 CC lib/ftl/ftl_rq.o 00:03:17.679 CC lib/nvmf/rdma.o 00:03:17.679 CC lib/nvmf/auth.o 00:03:17.679 CC lib/ftl/ftl_reloc.o 00:03:17.679 CC lib/iscsi/conn.o 00:03:17.679 CC lib/ftl/ftl_l2p_cache.o 00:03:17.937 CC lib/vhost/vhost.o 00:03:17.937 CC lib/ftl/ftl_p2l.o 00:03:17.937 CC lib/ftl/ftl_p2l_log.o 00:03:17.937 CC lib/iscsi/init_grp.o 00:03:18.242 CC lib/iscsi/iscsi.o 00:03:18.242 CC lib/iscsi/param.o 00:03:18.501 CC lib/iscsi/portal_grp.o 
00:03:18.501 CC lib/ftl/mngt/ftl_mngt.o 00:03:18.501 CC lib/vhost/vhost_rpc.o 00:03:18.501 CC lib/iscsi/tgt_node.o 00:03:18.760 CC lib/iscsi/iscsi_subsystem.o 00:03:18.760 CC lib/iscsi/iscsi_rpc.o 00:03:18.760 CC lib/iscsi/task.o 00:03:18.760 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:19.021 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:19.021 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:19.021 CC lib/vhost/vhost_scsi.o 00:03:19.021 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:19.279 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:19.279 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:19.279 CC lib/vhost/vhost_blk.o 00:03:19.279 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:19.279 CC lib/vhost/rte_vhost_user.o 00:03:19.279 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:19.537 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:19.537 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:19.537 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:19.537 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:19.796 CC lib/ftl/utils/ftl_conf.o 00:03:19.796 CC lib/ftl/utils/ftl_md.o 00:03:19.796 CC lib/ftl/utils/ftl_mempool.o 00:03:19.796 CC lib/ftl/utils/ftl_bitmap.o 00:03:20.053 CC lib/ftl/utils/ftl_property.o 00:03:20.053 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:20.053 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:20.053 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:20.053 LIB libspdk_iscsi.a 00:03:20.054 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:20.312 SO libspdk_iscsi.so.8.0 00:03:20.312 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:20.312 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:20.312 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:20.312 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:20.312 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:20.312 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:20.570 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:20.570 SYMLINK libspdk_iscsi.so 00:03:20.570 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:20.570 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:20.570 CC lib/ftl/base/ftl_base_dev.o 00:03:20.570 CC lib/ftl/base/ftl_base_bdev.o 00:03:20.570 CC lib/ftl/ftl_trace.o 
00:03:20.570 LIB libspdk_vhost.a 00:03:20.570 SO libspdk_vhost.so.8.0 00:03:20.828 LIB libspdk_nvmf.a 00:03:20.828 SYMLINK libspdk_vhost.so 00:03:20.828 LIB libspdk_ftl.a 00:03:20.828 SO libspdk_nvmf.so.20.0 00:03:21.086 SO libspdk_ftl.so.9.0 00:03:21.348 SYMLINK libspdk_nvmf.so 00:03:21.605 SYMLINK libspdk_ftl.so 00:03:21.863 CC module/env_dpdk/env_dpdk_rpc.o 00:03:22.122 CC module/blob/bdev/blob_bdev.o 00:03:22.122 CC module/keyring/file/keyring.o 00:03:22.122 CC module/keyring/linux/keyring.o 00:03:22.122 CC module/sock/posix/posix.o 00:03:22.122 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:22.122 CC module/fsdev/aio/fsdev_aio.o 00:03:22.122 CC module/scheduler/gscheduler/gscheduler.o 00:03:22.122 CC module/accel/error/accel_error.o 00:03:22.122 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:22.122 LIB libspdk_env_dpdk_rpc.a 00:03:22.122 SO libspdk_env_dpdk_rpc.so.6.0 00:03:22.122 SYMLINK libspdk_env_dpdk_rpc.so 00:03:22.122 CC module/keyring/file/keyring_rpc.o 00:03:22.122 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:22.122 LIB libspdk_scheduler_dpdk_governor.a 00:03:22.380 CC module/keyring/linux/keyring_rpc.o 00:03:22.380 LIB libspdk_scheduler_gscheduler.a 00:03:22.380 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:22.380 LIB libspdk_scheduler_dynamic.a 00:03:22.380 SO libspdk_scheduler_gscheduler.so.4.0 00:03:22.380 SO libspdk_scheduler_dynamic.so.4.0 00:03:22.380 LIB libspdk_blob_bdev.a 00:03:22.380 LIB libspdk_keyring_file.a 00:03:22.380 CC module/accel/error/accel_error_rpc.o 00:03:22.380 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:22.380 SYMLINK libspdk_scheduler_gscheduler.so 00:03:22.380 CC module/fsdev/aio/linux_aio_mgr.o 00:03:22.380 SO libspdk_blob_bdev.so.12.0 00:03:22.380 SYMLINK libspdk_scheduler_dynamic.so 00:03:22.380 SO libspdk_keyring_file.so.2.0 00:03:22.380 LIB libspdk_keyring_linux.a 00:03:22.380 SYMLINK libspdk_blob_bdev.so 00:03:22.380 SYMLINK libspdk_keyring_file.so 00:03:22.380 SO libspdk_keyring_linux.so.1.0 
00:03:22.637 LIB libspdk_accel_error.a 00:03:22.637 SYMLINK libspdk_keyring_linux.so 00:03:22.637 SO libspdk_accel_error.so.2.0 00:03:22.637 CC module/accel/dsa/accel_dsa.o 00:03:22.637 CC module/accel/iaa/accel_iaa.o 00:03:22.637 CC module/accel/ioat/accel_ioat.o 00:03:22.637 CC module/accel/dsa/accel_dsa_rpc.o 00:03:22.637 SYMLINK libspdk_accel_error.so 00:03:22.637 CC module/accel/iaa/accel_iaa_rpc.o 00:03:22.637 CC module/bdev/delay/vbdev_delay.o 00:03:22.896 CC module/blobfs/bdev/blobfs_bdev.o 00:03:22.896 CC module/bdev/error/vbdev_error.o 00:03:22.896 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:22.896 CC module/accel/ioat/accel_ioat_rpc.o 00:03:22.896 LIB libspdk_accel_iaa.a 00:03:22.896 SO libspdk_accel_iaa.so.3.0 00:03:22.896 LIB libspdk_fsdev_aio.a 00:03:22.896 LIB libspdk_accel_dsa.a 00:03:22.896 CC module/bdev/error/vbdev_error_rpc.o 00:03:22.896 LIB libspdk_accel_ioat.a 00:03:22.896 SYMLINK libspdk_accel_iaa.so 00:03:22.896 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:22.896 SO libspdk_accel_dsa.so.5.0 00:03:23.154 SO libspdk_fsdev_aio.so.1.0 00:03:23.154 SO libspdk_accel_ioat.so.6.0 00:03:23.154 LIB libspdk_blobfs_bdev.a 00:03:23.154 CC module/bdev/gpt/gpt.o 00:03:23.154 LIB libspdk_sock_posix.a 00:03:23.154 SO libspdk_blobfs_bdev.so.6.0 00:03:23.154 SO libspdk_sock_posix.so.6.0 00:03:23.154 SYMLINK libspdk_accel_ioat.so 00:03:23.154 SYMLINK libspdk_accel_dsa.so 00:03:23.154 CC module/bdev/gpt/vbdev_gpt.o 00:03:23.154 SYMLINK libspdk_fsdev_aio.so 00:03:23.154 SYMLINK libspdk_blobfs_bdev.so 00:03:23.154 SYMLINK libspdk_sock_posix.so 00:03:23.154 LIB libspdk_bdev_error.a 00:03:23.154 SO libspdk_bdev_error.so.6.0 00:03:23.154 LIB libspdk_bdev_delay.a 00:03:23.413 CC module/bdev/lvol/vbdev_lvol.o 00:03:23.413 SO libspdk_bdev_delay.so.6.0 00:03:23.413 CC module/bdev/nvme/bdev_nvme.o 00:03:23.413 CC module/bdev/null/bdev_null.o 00:03:23.413 CC module/bdev/malloc/bdev_malloc.o 00:03:23.413 SYMLINK libspdk_bdev_error.so 00:03:23.413 CC 
module/bdev/null/bdev_null_rpc.o 00:03:23.413 CC module/bdev/passthru/vbdev_passthru.o 00:03:23.413 SYMLINK libspdk_bdev_delay.so 00:03:23.413 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:23.413 CC module/bdev/raid/bdev_raid.o 00:03:23.413 LIB libspdk_bdev_gpt.a 00:03:23.413 CC module/bdev/split/vbdev_split.o 00:03:23.413 SO libspdk_bdev_gpt.so.6.0 00:03:23.671 CC module/bdev/raid/bdev_raid_rpc.o 00:03:23.671 SYMLINK libspdk_bdev_gpt.so 00:03:23.671 CC module/bdev/nvme/nvme_rpc.o 00:03:23.671 LIB libspdk_bdev_null.a 00:03:23.671 SO libspdk_bdev_null.so.6.0 00:03:23.671 CC module/bdev/split/vbdev_split_rpc.o 00:03:23.671 SYMLINK libspdk_bdev_null.so 00:03:23.957 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:23.957 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:23.957 LIB libspdk_bdev_split.a 00:03:23.957 SO libspdk_bdev_split.so.6.0 00:03:23.957 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:23.957 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:23.957 CC module/bdev/aio/bdev_aio.o 00:03:23.957 LIB libspdk_bdev_malloc.a 00:03:23.957 SO libspdk_bdev_malloc.so.6.0 00:03:23.957 CC module/bdev/ftl/bdev_ftl.o 00:03:23.957 LIB libspdk_bdev_passthru.a 00:03:23.957 SYMLINK libspdk_bdev_split.so 00:03:24.216 CC module/bdev/aio/bdev_aio_rpc.o 00:03:24.216 SO libspdk_bdev_passthru.so.6.0 00:03:24.216 SYMLINK libspdk_bdev_malloc.so 00:03:24.216 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:24.216 SYMLINK libspdk_bdev_passthru.so 00:03:24.216 CC module/bdev/nvme/bdev_mdns_client.o 00:03:24.474 CC module/bdev/iscsi/bdev_iscsi.o 00:03:24.474 CC module/bdev/raid/bdev_raid_sb.o 00:03:24.474 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:24.474 LIB libspdk_bdev_ftl.a 00:03:24.474 LIB libspdk_bdev_aio.a 00:03:24.474 CC module/bdev/nvme/vbdev_opal.o 00:03:24.474 LIB libspdk_bdev_lvol.a 00:03:24.475 SO libspdk_bdev_aio.so.6.0 00:03:24.475 SO libspdk_bdev_ftl.so.6.0 00:03:24.475 SO libspdk_bdev_lvol.so.6.0 00:03:24.475 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:24.475 SYMLINK 
libspdk_bdev_aio.so 00:03:24.475 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:24.475 SYMLINK libspdk_bdev_ftl.so 00:03:24.475 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:24.734 LIB libspdk_bdev_zone_block.a 00:03:24.734 SYMLINK libspdk_bdev_lvol.so 00:03:24.734 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:24.734 SO libspdk_bdev_zone_block.so.6.0 00:03:24.734 SYMLINK libspdk_bdev_zone_block.so 00:03:24.734 CC module/bdev/raid/raid0.o 00:03:24.734 CC module/bdev/raid/raid1.o 00:03:24.734 CC module/bdev/raid/concat.o 00:03:24.734 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:24.734 CC module/bdev/raid/raid5f.o 00:03:24.734 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:24.734 LIB libspdk_bdev_iscsi.a 00:03:24.992 SO libspdk_bdev_iscsi.so.6.0 00:03:24.992 SYMLINK libspdk_bdev_iscsi.so 00:03:25.250 LIB libspdk_bdev_virtio.a 00:03:25.250 SO libspdk_bdev_virtio.so.6.0 00:03:25.250 SYMLINK libspdk_bdev_virtio.so 00:03:25.509 LIB libspdk_bdev_raid.a 00:03:25.509 SO libspdk_bdev_raid.so.6.0 00:03:25.869 SYMLINK libspdk_bdev_raid.so 00:03:27.261 LIB libspdk_bdev_nvme.a 00:03:27.261 SO libspdk_bdev_nvme.so.7.1 00:03:27.261 SYMLINK libspdk_bdev_nvme.so 00:03:27.828 CC module/event/subsystems/keyring/keyring.o 00:03:27.828 CC module/event/subsystems/scheduler/scheduler.o 00:03:27.828 CC module/event/subsystems/iobuf/iobuf.o 00:03:27.828 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:27.828 CC module/event/subsystems/vmd/vmd.o 00:03:27.828 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:27.828 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:27.828 CC module/event/subsystems/fsdev/fsdev.o 00:03:27.828 CC module/event/subsystems/sock/sock.o 00:03:27.828 LIB libspdk_event_keyring.a 00:03:27.828 LIB libspdk_event_vhost_blk.a 00:03:27.828 SO libspdk_event_keyring.so.1.0 00:03:27.828 LIB libspdk_event_scheduler.a 00:03:27.828 SO libspdk_event_vhost_blk.so.3.0 00:03:27.828 LIB libspdk_event_vmd.a 00:03:27.828 LIB libspdk_event_sock.a 00:03:27.828 LIB libspdk_event_fsdev.a 
00:03:27.828 SO libspdk_event_scheduler.so.4.0 00:03:27.828 LIB libspdk_event_iobuf.a 00:03:27.828 SO libspdk_event_fsdev.so.1.0 00:03:27.828 SO libspdk_event_sock.so.5.0 00:03:27.828 SO libspdk_event_vmd.so.6.0 00:03:28.086 SYMLINK libspdk_event_vhost_blk.so 00:03:28.086 SYMLINK libspdk_event_keyring.so 00:03:28.086 SO libspdk_event_iobuf.so.3.0 00:03:28.086 SYMLINK libspdk_event_scheduler.so 00:03:28.086 SYMLINK libspdk_event_fsdev.so 00:03:28.086 SYMLINK libspdk_event_vmd.so 00:03:28.086 SYMLINK libspdk_event_sock.so 00:03:28.086 SYMLINK libspdk_event_iobuf.so 00:03:28.344 CC module/event/subsystems/accel/accel.o 00:03:28.603 LIB libspdk_event_accel.a 00:03:28.603 SO libspdk_event_accel.so.6.0 00:03:28.603 SYMLINK libspdk_event_accel.so 00:03:28.862 CC module/event/subsystems/bdev/bdev.o 00:03:29.121 LIB libspdk_event_bdev.a 00:03:29.121 SO libspdk_event_bdev.so.6.0 00:03:29.381 SYMLINK libspdk_event_bdev.so 00:03:29.381 CC module/event/subsystems/ublk/ublk.o 00:03:29.381 CC module/event/subsystems/scsi/scsi.o 00:03:29.381 CC module/event/subsystems/nbd/nbd.o 00:03:29.381 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:29.381 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:29.640 LIB libspdk_event_ublk.a 00:03:29.640 LIB libspdk_event_nbd.a 00:03:29.640 SO libspdk_event_ublk.so.3.0 00:03:29.640 SO libspdk_event_nbd.so.6.0 00:03:29.640 LIB libspdk_event_scsi.a 00:03:29.640 SO libspdk_event_scsi.so.6.0 00:03:29.899 SYMLINK libspdk_event_ublk.so 00:03:29.899 SYMLINK libspdk_event_nbd.so 00:03:29.899 SYMLINK libspdk_event_scsi.so 00:03:29.899 LIB libspdk_event_nvmf.a 00:03:29.899 SO libspdk_event_nvmf.so.6.0 00:03:29.899 SYMLINK libspdk_event_nvmf.so 00:03:30.157 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:30.157 CC module/event/subsystems/iscsi/iscsi.o 00:03:30.157 LIB libspdk_event_vhost_scsi.a 00:03:30.415 SO libspdk_event_vhost_scsi.so.3.0 00:03:30.415 LIB libspdk_event_iscsi.a 00:03:30.415 SO libspdk_event_iscsi.so.6.0 00:03:30.415 SYMLINK 
libspdk_event_vhost_scsi.so 00:03:30.415 SYMLINK libspdk_event_iscsi.so 00:03:30.674 SO libspdk.so.6.0 00:03:30.674 SYMLINK libspdk.so 00:03:30.932 CXX app/trace/trace.o 00:03:30.932 CC app/trace_record/trace_record.o 00:03:30.932 CC app/spdk_lspci/spdk_lspci.o 00:03:30.932 CC app/spdk_nvme_identify/identify.o 00:03:30.932 CC app/spdk_nvme_perf/perf.o 00:03:30.932 CC app/iscsi_tgt/iscsi_tgt.o 00:03:30.932 CC app/nvmf_tgt/nvmf_main.o 00:03:30.932 CC app/spdk_tgt/spdk_tgt.o 00:03:30.932 CC test/thread/poller_perf/poller_perf.o 00:03:30.932 CC examples/util/zipf/zipf.o 00:03:30.932 LINK spdk_lspci 00:03:31.191 LINK spdk_tgt 00:03:31.191 LINK poller_perf 00:03:31.191 LINK iscsi_tgt 00:03:31.191 LINK nvmf_tgt 00:03:31.191 LINK zipf 00:03:31.191 LINK spdk_trace_record 00:03:31.191 CC app/spdk_nvme_discover/discovery_aer.o 00:03:31.449 LINK spdk_trace 00:03:31.449 TEST_HEADER include/spdk/accel.h 00:03:31.449 TEST_HEADER include/spdk/accel_module.h 00:03:31.449 TEST_HEADER include/spdk/assert.h 00:03:31.449 TEST_HEADER include/spdk/barrier.h 00:03:31.449 TEST_HEADER include/spdk/base64.h 00:03:31.449 TEST_HEADER include/spdk/bdev.h 00:03:31.449 TEST_HEADER include/spdk/bdev_module.h 00:03:31.449 TEST_HEADER include/spdk/bdev_zone.h 00:03:31.449 TEST_HEADER include/spdk/bit_array.h 00:03:31.708 TEST_HEADER include/spdk/bit_pool.h 00:03:31.708 TEST_HEADER include/spdk/blob_bdev.h 00:03:31.708 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:31.708 TEST_HEADER include/spdk/blobfs.h 00:03:31.708 CC app/spdk_top/spdk_top.o 00:03:31.708 TEST_HEADER include/spdk/blob.h 00:03:31.708 LINK spdk_nvme_discover 00:03:31.708 TEST_HEADER include/spdk/conf.h 00:03:31.708 TEST_HEADER include/spdk/config.h 00:03:31.708 TEST_HEADER include/spdk/cpuset.h 00:03:31.708 TEST_HEADER include/spdk/crc16.h 00:03:31.708 CC examples/ioat/perf/perf.o 00:03:31.708 TEST_HEADER include/spdk/crc32.h 00:03:31.708 TEST_HEADER include/spdk/crc64.h 00:03:31.708 TEST_HEADER include/spdk/dif.h 00:03:31.708 
TEST_HEADER include/spdk/dma.h 00:03:31.708 TEST_HEADER include/spdk/endian.h 00:03:31.708 TEST_HEADER include/spdk/env_dpdk.h 00:03:31.708 TEST_HEADER include/spdk/env.h 00:03:31.708 TEST_HEADER include/spdk/event.h 00:03:31.708 TEST_HEADER include/spdk/fd_group.h 00:03:31.708 TEST_HEADER include/spdk/fd.h 00:03:31.708 TEST_HEADER include/spdk/file.h 00:03:31.708 TEST_HEADER include/spdk/fsdev.h 00:03:31.708 TEST_HEADER include/spdk/fsdev_module.h 00:03:31.708 TEST_HEADER include/spdk/ftl.h 00:03:31.708 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:31.708 TEST_HEADER include/spdk/gpt_spec.h 00:03:31.708 CC test/app/bdev_svc/bdev_svc.o 00:03:31.708 TEST_HEADER include/spdk/hexlify.h 00:03:31.708 TEST_HEADER include/spdk/histogram_data.h 00:03:31.708 TEST_HEADER include/spdk/idxd.h 00:03:31.708 TEST_HEADER include/spdk/idxd_spec.h 00:03:31.708 TEST_HEADER include/spdk/init.h 00:03:31.708 TEST_HEADER include/spdk/ioat.h 00:03:31.708 TEST_HEADER include/spdk/ioat_spec.h 00:03:31.708 TEST_HEADER include/spdk/iscsi_spec.h 00:03:31.708 TEST_HEADER include/spdk/json.h 00:03:31.708 TEST_HEADER include/spdk/jsonrpc.h 00:03:31.708 TEST_HEADER include/spdk/keyring.h 00:03:31.708 TEST_HEADER include/spdk/keyring_module.h 00:03:31.708 TEST_HEADER include/spdk/likely.h 00:03:31.708 CC test/dma/test_dma/test_dma.o 00:03:31.708 TEST_HEADER include/spdk/log.h 00:03:31.708 TEST_HEADER include/spdk/lvol.h 00:03:31.708 TEST_HEADER include/spdk/md5.h 00:03:31.708 TEST_HEADER include/spdk/memory.h 00:03:31.708 TEST_HEADER include/spdk/mmio.h 00:03:31.708 TEST_HEADER include/spdk/nbd.h 00:03:31.708 TEST_HEADER include/spdk/net.h 00:03:31.708 TEST_HEADER include/spdk/notify.h 00:03:31.708 TEST_HEADER include/spdk/nvme.h 00:03:31.708 TEST_HEADER include/spdk/nvme_intel.h 00:03:31.708 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:31.708 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:31.708 TEST_HEADER include/spdk/nvme_spec.h 00:03:31.708 TEST_HEADER include/spdk/nvme_zns.h 00:03:31.708 
CC examples/vmd/lsvmd/lsvmd.o 00:03:31.708 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:31.708 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:31.708 TEST_HEADER include/spdk/nvmf.h 00:03:31.708 TEST_HEADER include/spdk/nvmf_spec.h 00:03:31.708 TEST_HEADER include/spdk/nvmf_transport.h 00:03:31.708 CC examples/vmd/led/led.o 00:03:31.708 TEST_HEADER include/spdk/opal.h 00:03:31.708 TEST_HEADER include/spdk/opal_spec.h 00:03:31.708 TEST_HEADER include/spdk/pci_ids.h 00:03:31.708 TEST_HEADER include/spdk/pipe.h 00:03:31.708 TEST_HEADER include/spdk/queue.h 00:03:31.708 TEST_HEADER include/spdk/reduce.h 00:03:31.708 TEST_HEADER include/spdk/rpc.h 00:03:31.708 TEST_HEADER include/spdk/scheduler.h 00:03:31.708 TEST_HEADER include/spdk/scsi.h 00:03:31.708 TEST_HEADER include/spdk/scsi_spec.h 00:03:31.708 TEST_HEADER include/spdk/sock.h 00:03:31.708 TEST_HEADER include/spdk/stdinc.h 00:03:31.708 TEST_HEADER include/spdk/string.h 00:03:31.708 TEST_HEADER include/spdk/thread.h 00:03:31.708 TEST_HEADER include/spdk/trace.h 00:03:31.708 TEST_HEADER include/spdk/trace_parser.h 00:03:31.708 TEST_HEADER include/spdk/tree.h 00:03:31.708 TEST_HEADER include/spdk/ublk.h 00:03:31.708 TEST_HEADER include/spdk/util.h 00:03:31.708 TEST_HEADER include/spdk/uuid.h 00:03:31.708 TEST_HEADER include/spdk/version.h 00:03:31.708 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:31.708 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:31.708 TEST_HEADER include/spdk/vhost.h 00:03:31.708 TEST_HEADER include/spdk/vmd.h 00:03:31.708 TEST_HEADER include/spdk/xor.h 00:03:31.708 TEST_HEADER include/spdk/zipf.h 00:03:31.708 CXX test/cpp_headers/accel.o 00:03:31.966 LINK bdev_svc 00:03:31.966 LINK lsvmd 00:03:31.966 LINK ioat_perf 00:03:31.966 CC examples/idxd/perf/perf.o 00:03:31.966 LINK led 00:03:31.966 LINK spdk_nvme_identify 00:03:31.966 LINK spdk_nvme_perf 00:03:31.966 CXX test/cpp_headers/accel_module.o 00:03:31.966 CXX test/cpp_headers/assert.o 00:03:32.224 CC examples/ioat/verify/verify.o 00:03:32.224 
CXX test/cpp_headers/barrier.o 00:03:32.224 CC test/app/histogram_perf/histogram_perf.o 00:03:32.224 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:32.483 LINK test_dma 00:03:32.483 CC test/app/jsoncat/jsoncat.o 00:03:32.483 CXX test/cpp_headers/base64.o 00:03:32.483 LINK idxd_perf 00:03:32.483 LINK verify 00:03:32.483 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:32.483 LINK histogram_perf 00:03:32.483 LINK jsoncat 00:03:32.740 CXX test/cpp_headers/bdev.o 00:03:32.740 CC test/env/mem_callbacks/mem_callbacks.o 00:03:32.740 CXX test/cpp_headers/bdev_module.o 00:03:32.740 LINK interrupt_tgt 00:03:32.740 CC test/app/stub/stub.o 00:03:32.740 CC test/env/vtophys/vtophys.o 00:03:32.740 CXX test/cpp_headers/bdev_zone.o 00:03:32.999 LINK nvme_fuzz 00:03:32.999 LINK spdk_top 00:03:32.999 CXX test/cpp_headers/bit_array.o 00:03:32.999 CC examples/sock/hello_world/hello_sock.o 00:03:32.999 CXX test/cpp_headers/bit_pool.o 00:03:32.999 CC examples/thread/thread/thread_ex.o 00:03:32.999 LINK vtophys 00:03:32.999 LINK stub 00:03:32.999 CXX test/cpp_headers/blob_bdev.o 00:03:32.999 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.259 CXX test/cpp_headers/blobfs.o 00:03:33.259 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:33.259 CC app/spdk_dd/spdk_dd.o 00:03:33.259 CXX test/cpp_headers/blob.o 00:03:33.259 CXX test/cpp_headers/conf.o 00:03:33.259 LINK hello_sock 00:03:33.259 LINK thread 00:03:33.259 LINK mem_callbacks 00:03:33.259 CXX test/cpp_headers/config.o 00:03:33.517 CXX test/cpp_headers/cpuset.o 00:03:33.517 CXX test/cpp_headers/crc16.o 00:03:33.517 CC app/vhost/vhost.o 00:03:33.517 CC app/fio/nvme/fio_plugin.o 00:03:33.517 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.517 CC app/fio/bdev/fio_plugin.o 00:03:33.517 CC test/env/memory/memory_ut.o 00:03:33.774 CXX test/cpp_headers/crc32.o 00:03:33.774 LINK spdk_dd 00:03:33.774 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:33.774 CC examples/nvme/hello_world/hello_world.o 00:03:33.774 LINK vhost 00:03:33.774 LINK 
env_dpdk_post_init 00:03:33.774 CXX test/cpp_headers/crc64.o 00:03:33.774 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.033 CXX test/cpp_headers/dif.o 00:03:34.033 CXX test/cpp_headers/dma.o 00:03:34.033 LINK hello_world 00:03:34.292 CXX test/cpp_headers/endian.o 00:03:34.292 CC examples/accel/perf/accel_perf.o 00:03:34.292 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:34.292 LINK spdk_bdev 00:03:34.292 LINK spdk_nvme 00:03:34.292 CC examples/nvme/reconnect/reconnect.o 00:03:34.292 CC examples/blob/hello_world/hello_blob.o 00:03:34.292 LINK vhost_fuzz 00:03:34.565 CXX test/cpp_headers/env_dpdk.o 00:03:34.565 CC test/env/pci/pci_ut.o 00:03:34.565 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:34.565 CXX test/cpp_headers/env.o 00:03:34.565 LINK hello_fsdev 00:03:34.831 LINK hello_blob 00:03:34.831 CC test/event/event_perf/event_perf.o 00:03:34.831 LINK reconnect 00:03:34.831 CXX test/cpp_headers/event.o 00:03:34.831 LINK accel_perf 00:03:35.089 LINK event_perf 00:03:35.089 CXX test/cpp_headers/fd_group.o 00:03:35.089 LINK pci_ut 00:03:35.089 CC examples/nvme/arbitration/arbitration.o 00:03:35.089 CC test/nvme/aer/aer.o 00:03:35.089 LINK memory_ut 00:03:35.089 CC examples/blob/cli/blobcli.o 00:03:35.089 CXX test/cpp_headers/fd.o 00:03:35.347 CC test/event/reactor/reactor.o 00:03:35.347 CXX test/cpp_headers/file.o 00:03:35.347 CC test/event/reactor_perf/reactor_perf.o 00:03:35.347 LINK nvme_manage 00:03:35.604 LINK reactor 00:03:35.604 LINK aer 00:03:35.604 CC test/event/app_repeat/app_repeat.o 00:03:35.604 CXX test/cpp_headers/fsdev.o 00:03:35.604 LINK arbitration 00:03:35.604 LINK reactor_perf 00:03:35.604 CXX test/cpp_headers/fsdev_module.o 00:03:35.604 CC examples/bdev/hello_world/hello_bdev.o 00:03:35.605 CXX test/cpp_headers/ftl.o 00:03:35.605 LINK iscsi_fuzz 00:03:35.605 LINK app_repeat 00:03:35.863 LINK blobcli 00:03:35.863 CC test/nvme/reset/reset.o 00:03:35.863 CC test/nvme/sgl/sgl.o 00:03:35.863 CC examples/nvme/hotplug/hotplug.o 00:03:35.863 CC 
test/nvme/e2edp/nvme_dp.o 00:03:35.863 LINK hello_bdev 00:03:35.863 CXX test/cpp_headers/fuse_dispatcher.o 00:03:35.863 CC test/rpc_client/rpc_client_test.o 00:03:35.863 CXX test/cpp_headers/gpt_spec.o 00:03:35.863 CXX test/cpp_headers/hexlify.o 00:03:36.121 CC test/event/scheduler/scheduler.o 00:03:36.121 LINK reset 00:03:36.121 LINK hotplug 00:03:36.121 LINK rpc_client_test 00:03:36.121 LINK sgl 00:03:36.121 CXX test/cpp_headers/histogram_data.o 00:03:36.121 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:36.121 LINK nvme_dp 00:03:36.121 CC test/nvme/overhead/overhead.o 00:03:36.121 CC examples/bdev/bdevperf/bdevperf.o 00:03:36.379 LINK scheduler 00:03:36.379 CXX test/cpp_headers/idxd.o 00:03:36.379 CC examples/nvme/abort/abort.o 00:03:36.379 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:36.379 LINK cmb_copy 00:03:36.637 CC test/accel/dif/dif.o 00:03:36.637 CXX test/cpp_headers/idxd_spec.o 00:03:36.637 CXX test/cpp_headers/init.o 00:03:36.637 LINK overhead 00:03:36.637 LINK pmr_persistence 00:03:36.637 CC test/blobfs/mkfs/mkfs.o 00:03:36.895 CC test/lvol/esnap/esnap.o 00:03:36.895 CC test/nvme/err_injection/err_injection.o 00:03:36.895 CXX test/cpp_headers/ioat.o 00:03:36.895 CXX test/cpp_headers/ioat_spec.o 00:03:36.895 CC test/nvme/startup/startup.o 00:03:36.896 LINK abort 00:03:36.896 LINK mkfs 00:03:36.896 CC test/nvme/reserve/reserve.o 00:03:37.153 CXX test/cpp_headers/iscsi_spec.o 00:03:37.153 LINK err_injection 00:03:37.153 CXX test/cpp_headers/json.o 00:03:37.153 CC test/nvme/simple_copy/simple_copy.o 00:03:37.153 LINK startup 00:03:37.153 LINK reserve 00:03:37.410 CC test/nvme/compliance/nvme_compliance.o 00:03:37.410 CC test/nvme/boot_partition/boot_partition.o 00:03:37.410 CXX test/cpp_headers/jsonrpc.o 00:03:37.410 CC test/nvme/connect_stress/connect_stress.o 00:03:37.410 CC test/nvme/fused_ordering/fused_ordering.o 00:03:37.410 LINK simple_copy 00:03:37.668 LINK boot_partition 00:03:37.668 LINK bdevperf 00:03:37.668 CXX 
test/cpp_headers/keyring.o 00:03:37.668 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:37.668 LINK connect_stress 00:03:37.668 LINK dif 00:03:37.668 CXX test/cpp_headers/keyring_module.o 00:03:37.668 CC test/nvme/fdp/fdp.o 00:03:37.668 LINK fused_ordering 00:03:37.926 LINK nvme_compliance 00:03:37.926 LINK doorbell_aers 00:03:37.926 CC test/nvme/cuse/cuse.o 00:03:37.926 CXX test/cpp_headers/likely.o 00:03:37.926 CXX test/cpp_headers/log.o 00:03:37.926 CC examples/nvmf/nvmf/nvmf.o 00:03:37.926 CXX test/cpp_headers/lvol.o 00:03:37.926 CXX test/cpp_headers/md5.o 00:03:38.183 CXX test/cpp_headers/memory.o 00:03:38.183 CXX test/cpp_headers/mmio.o 00:03:38.183 CXX test/cpp_headers/nbd.o 00:03:38.183 CXX test/cpp_headers/net.o 00:03:38.183 LINK fdp 00:03:38.183 CXX test/cpp_headers/notify.o 00:03:38.183 CXX test/cpp_headers/nvme.o 00:03:38.440 CXX test/cpp_headers/nvme_intel.o 00:03:38.440 LINK nvmf 00:03:38.440 CC test/bdev/bdevio/bdevio.o 00:03:38.440 CXX test/cpp_headers/nvme_ocssd.o 00:03:38.440 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:38.440 CXX test/cpp_headers/nvme_spec.o 00:03:38.440 CXX test/cpp_headers/nvme_zns.o 00:03:38.440 CXX test/cpp_headers/nvmf_cmd.o 00:03:38.440 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:38.703 CXX test/cpp_headers/nvmf.o 00:03:38.703 CXX test/cpp_headers/nvmf_spec.o 00:03:38.703 CXX test/cpp_headers/nvmf_transport.o 00:03:38.703 CXX test/cpp_headers/opal.o 00:03:38.703 CXX test/cpp_headers/opal_spec.o 00:03:38.703 CXX test/cpp_headers/pci_ids.o 00:03:38.960 CXX test/cpp_headers/pipe.o 00:03:38.960 CXX test/cpp_headers/queue.o 00:03:38.960 LINK bdevio 00:03:38.960 CXX test/cpp_headers/reduce.o 00:03:38.960 CXX test/cpp_headers/rpc.o 00:03:38.960 CXX test/cpp_headers/scheduler.o 00:03:38.960 CXX test/cpp_headers/scsi.o 00:03:38.960 CXX test/cpp_headers/scsi_spec.o 00:03:38.960 CXX test/cpp_headers/sock.o 00:03:38.960 CXX test/cpp_headers/stdinc.o 00:03:39.217 CXX test/cpp_headers/string.o 00:03:39.217 CXX test/cpp_headers/thread.o 
00:03:39.217 CXX test/cpp_headers/trace.o 00:03:39.217 CXX test/cpp_headers/trace_parser.o 00:03:39.217 CXX test/cpp_headers/tree.o 00:03:39.217 CXX test/cpp_headers/ublk.o 00:03:39.217 CXX test/cpp_headers/util.o 00:03:39.217 CXX test/cpp_headers/uuid.o 00:03:39.217 CXX test/cpp_headers/version.o 00:03:39.217 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.217 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.217 CXX test/cpp_headers/vhost.o 00:03:39.217 CXX test/cpp_headers/vmd.o 00:03:39.477 CXX test/cpp_headers/xor.o 00:03:39.477 CXX test/cpp_headers/zipf.o 00:03:40.043 LINK cuse 00:03:45.361 LINK esnap 00:03:45.361 00:03:45.361 real 1m49.504s 00:03:45.361 user 9m54.122s 00:03:45.361 sys 1m56.727s 00:03:45.361 18:53:11 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:45.361 18:53:11 make -- common/autotest_common.sh@10 -- $ set +x 00:03:45.361 ************************************ 00:03:45.361 END TEST make 00:03:45.361 ************************************ 00:03:45.361 18:53:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:45.361 18:53:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:45.361 18:53:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:45.361 18:53:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.361 18:53:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:45.361 18:53:11 -- pm/common@44 -- $ pid=5405 00:03:45.361 18:53:11 -- pm/common@50 -- $ kill -TERM 5405 00:03:45.361 18:53:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.361 18:53:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:45.361 18:53:11 -- pm/common@44 -- $ pid=5407 00:03:45.361 18:53:11 -- pm/common@50 -- $ kill -TERM 5407 00:03:45.361 18:53:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:45.361 18:53:11 -- spdk/autorun.sh@27 -- $ sudo 
-E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:45.361 18:53:11 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:45.361 18:53:11 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:45.361 18:53:11 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:45.361 18:53:11 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:45.361 18:53:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:45.361 18:53:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:45.361 18:53:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:45.361 18:53:11 -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.361 18:53:11 -- scripts/common.sh@336 -- # read -ra ver1 00:03:45.361 18:53:11 -- scripts/common.sh@337 -- # IFS=.-: 00:03:45.361 18:53:11 -- scripts/common.sh@337 -- # read -ra ver2 00:03:45.361 18:53:11 -- scripts/common.sh@338 -- # local 'op=<' 00:03:45.361 18:53:11 -- scripts/common.sh@340 -- # ver1_l=2 00:03:45.361 18:53:11 -- scripts/common.sh@341 -- # ver2_l=1 00:03:45.361 18:53:11 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:45.361 18:53:11 -- scripts/common.sh@344 -- # case "$op" in 00:03:45.361 18:53:11 -- scripts/common.sh@345 -- # : 1 00:03:45.361 18:53:11 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:45.361 18:53:11 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:45.361 18:53:11 -- scripts/common.sh@365 -- # decimal 1 00:03:45.361 18:53:11 -- scripts/common.sh@353 -- # local d=1 00:03:45.361 18:53:11 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.361 18:53:11 -- scripts/common.sh@355 -- # echo 1 00:03:45.361 18:53:11 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:45.361 18:53:11 -- scripts/common.sh@366 -- # decimal 2 00:03:45.361 18:53:11 -- scripts/common.sh@353 -- # local d=2 00:03:45.361 18:53:11 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.361 18:53:11 -- scripts/common.sh@355 -- # echo 2 00:03:45.361 18:53:11 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:45.361 18:53:11 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:45.361 18:53:11 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:45.361 18:53:11 -- scripts/common.sh@368 -- # return 0 00:03:45.361 18:53:11 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.361 18:53:11 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.361 --rc genhtml_branch_coverage=1 00:03:45.361 --rc genhtml_function_coverage=1 00:03:45.361 --rc genhtml_legend=1 00:03:45.361 --rc geninfo_all_blocks=1 00:03:45.361 --rc geninfo_unexecuted_blocks=1 00:03:45.361 00:03:45.361 ' 00:03:45.361 18:53:11 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.361 --rc genhtml_branch_coverage=1 00:03:45.361 --rc genhtml_function_coverage=1 00:03:45.361 --rc genhtml_legend=1 00:03:45.361 --rc geninfo_all_blocks=1 00:03:45.361 --rc geninfo_unexecuted_blocks=1 00:03:45.361 00:03:45.361 ' 00:03:45.361 18:53:11 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.361 --rc genhtml_branch_coverage=1 00:03:45.361 --rc 
genhtml_function_coverage=1 00:03:45.361 --rc genhtml_legend=1 00:03:45.361 --rc geninfo_all_blocks=1 00:03:45.361 --rc geninfo_unexecuted_blocks=1 00:03:45.361 00:03:45.361 ' 00:03:45.361 18:53:11 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:45.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.361 --rc genhtml_branch_coverage=1 00:03:45.361 --rc genhtml_function_coverage=1 00:03:45.361 --rc genhtml_legend=1 00:03:45.361 --rc geninfo_all_blocks=1 00:03:45.361 --rc geninfo_unexecuted_blocks=1 00:03:45.361 00:03:45.361 ' 00:03:45.361 18:53:11 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:45.620 18:53:11 -- nvmf/common.sh@7 -- # uname -s 00:03:45.620 18:53:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:45.620 18:53:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:45.620 18:53:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:45.620 18:53:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:45.620 18:53:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:45.620 18:53:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:45.620 18:53:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:45.620 18:53:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:45.620 18:53:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:45.620 18:53:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.620 18:53:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e5fe3b7-19be-4379-823f-d85818d43e03 00:03:45.620 18:53:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=2e5fe3b7-19be-4379-823f-d85818d43e03 00:03:45.620 18:53:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.620 18:53:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.620 18:53:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:45.620 18:53:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:45.620 18:53:12 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:45.620 18:53:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:45.620 18:53:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.620 18:53:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.620 18:53:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.620 18:53:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.620 18:53:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.620 18:53:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.620 18:53:12 -- paths/export.sh@5 -- # export PATH 00:03:45.620 18:53:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.620 18:53:12 -- nvmf/common.sh@51 -- # : 0 00:03:45.620 18:53:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:45.620 18:53:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:45.620 18:53:12 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:45.620 18:53:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.620 18:53:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.620 18:53:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:45.620 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:45.620 18:53:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:45.620 18:53:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:45.620 18:53:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:45.620 18:53:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:45.620 18:53:12 -- spdk/autotest.sh@32 -- # uname -s 00:03:45.620 18:53:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:45.620 18:53:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:45.620 18:53:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.620 18:53:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:45.620 18:53:12 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.620 18:53:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:45.620 18:53:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:45.620 18:53:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:45.620 18:53:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:45.620 18:53:12 -- spdk/autotest.sh@48 -- # udevadm_pid=54574 00:03:45.620 18:53:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:45.620 18:53:12 -- pm/common@17 -- # local monitor 00:03:45.620 18:53:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.620 18:53:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.620 18:53:12 -- pm/common@21 -- # date +%s 00:03:45.620 18:53:12 -- pm/common@21 -- # 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732647192 00:03:45.621 18:53:12 -- pm/common@25 -- # sleep 1 00:03:45.621 18:53:12 -- pm/common@21 -- # date +%s 00:03:45.621 18:53:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732647192 00:03:45.621 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732647192_collect-cpu-load.pm.log 00:03:45.621 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732647192_collect-vmstat.pm.log 00:03:46.556 18:53:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:46.556 18:53:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:46.556 18:53:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.556 18:53:13 -- common/autotest_common.sh@10 -- # set +x 00:03:46.556 18:53:13 -- spdk/autotest.sh@59 -- # create_test_list 00:03:46.556 18:53:13 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:46.556 18:53:13 -- common/autotest_common.sh@10 -- # set +x 00:03:46.556 18:53:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:46.556 18:53:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:46.556 18:53:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:46.556 18:53:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:46.556 18:53:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:46.556 18:53:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:46.556 18:53:13 -- common/autotest_common.sh@1457 -- # uname 00:03:46.556 18:53:13 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:46.556 18:53:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:46.556 18:53:13 -- 
common/autotest_common.sh@1477 -- # uname 00:03:46.556 18:53:13 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:46.556 18:53:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:46.556 18:53:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:46.814 lcov: LCOV version 1.15 00:03:46.814 18:53:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:05.007 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:05.007 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:23.122 18:53:48 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:23.122 18:53:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.122 18:53:48 -- common/autotest_common.sh@10 -- # set +x 00:04:23.122 18:53:48 -- spdk/autotest.sh@78 -- # rm -f 00:04:23.122 18:53:48 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.122 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:23.122 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:23.122 18:53:48 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:23.122 18:53:48 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:23.122 18:53:48 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:23.122 18:53:48 -- common/autotest_common.sh@1658 -- # 
local nvme bdf 00:04:23.122 18:53:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:23.122 18:53:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:23.122 18:53:48 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:23.122 18:53:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.122 18:53:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.122 18:53:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:23.122 18:53:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:23.122 18:53:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:23.122 18:53:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:23.122 18:53:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.122 18:53:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:23.122 18:53:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:23.122 18:53:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:23.122 18:53:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:23.122 18:53:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.122 18:53:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:23.122 18:53:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:23.122 18:53:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:23.122 18:53:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:23.122 18:53:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:23.122 18:53:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:23.122 18:53:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.122 18:53:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.122 18:53:48 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:23.122 18:53:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:23.122 18:53:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:23.122 No valid GPT data, bailing 00:04:23.122 18:53:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.122 18:53:48 -- scripts/common.sh@394 -- # pt= 00:04:23.123 18:53:48 -- scripts/common.sh@395 -- # return 1 00:04:23.123 18:53:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:23.123 1+0 records in 00:04:23.123 1+0 records out 00:04:23.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513979 s, 204 MB/s 00:04:23.123 18:53:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.123 18:53:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.123 18:53:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:23.123 18:53:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:23.123 18:53:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:23.123 No valid GPT data, bailing 00:04:23.123 18:53:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:23.123 18:53:49 -- scripts/common.sh@394 -- # pt= 00:04:23.123 18:53:49 -- scripts/common.sh@395 -- # return 1 00:04:23.123 18:53:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:23.123 1+0 records in 00:04:23.123 1+0 records out 00:04:23.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511131 s, 205 MB/s 00:04:23.123 18:53:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.123 18:53:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.123 18:53:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:23.123 18:53:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:23.123 18:53:49 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:23.123 No valid GPT data, bailing 00:04:23.123 18:53:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:23.123 18:53:49 -- scripts/common.sh@394 -- # pt= 00:04:23.123 18:53:49 -- scripts/common.sh@395 -- # return 1 00:04:23.123 18:53:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:23.123 1+0 records in 00:04:23.123 1+0 records out 00:04:23.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043265 s, 242 MB/s 00:04:23.123 18:53:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.123 18:53:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.123 18:53:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:23.123 18:53:49 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:23.123 18:53:49 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:23.123 No valid GPT data, bailing 00:04:23.123 18:53:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:23.123 18:53:49 -- scripts/common.sh@394 -- # pt= 00:04:23.123 18:53:49 -- scripts/common.sh@395 -- # return 1 00:04:23.123 18:53:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:23.123 1+0 records in 00:04:23.123 1+0 records out 00:04:23.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0034379 s, 305 MB/s 00:04:23.123 18:53:49 -- spdk/autotest.sh@105 -- # sync 00:04:23.123 18:53:49 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:23.123 18:53:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:23.123 18:53:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:25.023 18:53:51 -- spdk/autotest.sh@111 -- # uname -s 00:04:25.023 18:53:51 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:25.023 18:53:51 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:25.023 18:53:51 -- spdk/autotest.sh@115 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:25.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.588 Hugepages 00:04:25.588 node hugesize free / total 00:04:25.588 node0 1048576kB 0 / 0 00:04:25.588 node0 2048kB 0 / 0 00:04:25.588 00:04:25.588 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.588 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:25.588 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:25.588 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:25.588 18:53:52 -- spdk/autotest.sh@117 -- # uname -s 00:04:25.588 18:53:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:25.588 18:53:52 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:25.588 18:53:52 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.522 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.522 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.522 18:53:53 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:27.895 18:53:54 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:27.895 18:53:54 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:27.895 18:53:54 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:27.895 18:53:54 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:27.895 18:53:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:27.895 18:53:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:27.895 18:53:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.895 18:53:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:27.895 18:53:54 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:04:27.895 18:53:54 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:27.895 18:53:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:27.895 18:53:54 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.152 Waiting for block devices as requested 00:04:28.152 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:28.152 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:28.152 18:53:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:28.152 18:53:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:28.152 18:53:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:28.152 18:53:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:28.152 18:53:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:28.152 18:53:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:28.152 18:53:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:28.152 18:53:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:28.152 18:53:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:28.152 18:53:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:28.410 18:53:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:28.410 18:53:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:28.410 18:53:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:28.410 18:53:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:28.410 18:53:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 
00:04:28.410 18:53:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:28.410 18:53:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:28.410 18:53:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:28.410 18:53:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:28.410 18:53:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:28.410 18:53:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:28.410 18:53:54 -- common/autotest_common.sh@1543 -- # continue 00:04:28.410 18:53:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:28.410 18:53:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:28.410 18:53:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:28.410 18:53:54 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:28.410 18:53:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:28.410 18:53:54 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:28.410 18:53:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:28.410 18:53:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:28.410 18:53:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:28.410 18:53:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:28.410 18:53:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:28.410 18:53:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:28.410 18:53:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:28.410 18:53:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:28.410 18:53:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:28.410 18:53:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:28.410 18:53:54 
-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:28.410 18:53:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:28.410 18:53:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:28.410 18:53:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:28.410 18:53:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:28.410 18:53:54 -- common/autotest_common.sh@1543 -- # continue 00:04:28.410 18:53:54 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:28.410 18:53:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.410 18:53:54 -- common/autotest_common.sh@10 -- # set +x 00:04:28.410 18:53:54 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:28.410 18:53:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.410 18:53:54 -- common/autotest_common.sh@10 -- # set +x 00:04:28.410 18:53:54 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.233 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.233 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.233 18:53:55 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:29.233 18:53:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.233 18:53:55 -- common/autotest_common.sh@10 -- # set +x 00:04:29.233 18:53:55 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:29.233 18:53:55 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:29.233 18:53:55 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:29.233 18:53:55 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:29.233 18:53:55 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:29.233 18:53:55 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:29.233 18:53:55 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:29.233 18:53:55 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:29.233 18:53:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:29.233 18:53:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:29.233 18:53:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.233 18:53:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:29.233 18:53:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:29.233 18:53:55 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:29.233 18:53:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:29.233 18:53:55 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:29.233 18:53:55 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:29.233 18:53:55 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:29.233 18:53:55 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.233 18:53:55 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:29.233 18:53:55 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:29.233 18:53:55 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:29.233 18:53:55 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:29.233 18:53:55 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:29.233 18:53:55 -- common/autotest_common.sh@1572 -- # return 0 00:04:29.233 18:53:55 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:29.233 18:53:55 -- common/autotest_common.sh@1580 -- # return 0 00:04:29.233 18:53:55 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:29.233 18:53:55 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:29.233 18:53:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:29.233 18:53:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:29.233 18:53:55 -- 
spdk/autotest.sh@149 -- # timing_enter lib 00:04:29.233 18:53:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.233 18:53:55 -- common/autotest_common.sh@10 -- # set +x 00:04:29.233 18:53:55 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:29.233 18:53:55 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:29.233 18:53:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.233 18:53:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.233 18:53:55 -- common/autotest_common.sh@10 -- # set +x 00:04:29.233 ************************************ 00:04:29.233 START TEST env 00:04:29.233 ************************************ 00:04:29.233 18:53:55 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:29.491 * Looking for test storage... 00:04:29.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:29.491 18:53:55 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.491 18:53:55 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.491 18:53:55 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:29.491 18:53:56 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:29.491 18:53:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.491 18:53:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.491 18:53:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.491 18:53:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.491 18:53:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.491 18:53:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.491 18:53:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.491 18:53:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.491 18:53:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.491 18:53:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.491 18:53:56 env -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:04:29.491 18:53:56 env -- scripts/common.sh@344 -- # case "$op" in 00:04:29.491 18:53:56 env -- scripts/common.sh@345 -- # : 1 00:04:29.491 18:53:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.491 18:53:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.491 18:53:56 env -- scripts/common.sh@365 -- # decimal 1 00:04:29.491 18:53:56 env -- scripts/common.sh@353 -- # local d=1 00:04:29.491 18:53:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.491 18:53:56 env -- scripts/common.sh@355 -- # echo 1 00:04:29.491 18:53:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.491 18:53:56 env -- scripts/common.sh@366 -- # decimal 2 00:04:29.491 18:53:56 env -- scripts/common.sh@353 -- # local d=2 00:04:29.491 18:53:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.491 18:53:56 env -- scripts/common.sh@355 -- # echo 2 00:04:29.491 18:53:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.491 18:53:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.491 18:53:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.491 18:53:56 env -- scripts/common.sh@368 -- # return 0 00:04:29.491 18:53:56 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.491 18:53:56 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:29.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.491 --rc genhtml_branch_coverage=1 00:04:29.491 --rc genhtml_function_coverage=1 00:04:29.491 --rc genhtml_legend=1 00:04:29.491 --rc geninfo_all_blocks=1 00:04:29.491 --rc geninfo_unexecuted_blocks=1 00:04:29.491 00:04:29.491 ' 00:04:29.491 18:53:56 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:29.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.491 --rc genhtml_branch_coverage=1 00:04:29.491 --rc genhtml_function_coverage=1 
00:04:29.491 --rc genhtml_legend=1 00:04:29.491 --rc geninfo_all_blocks=1 00:04:29.492 --rc geninfo_unexecuted_blocks=1 00:04:29.492 00:04:29.492 ' 00:04:29.492 18:53:56 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:29.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.492 --rc genhtml_branch_coverage=1 00:04:29.492 --rc genhtml_function_coverage=1 00:04:29.492 --rc genhtml_legend=1 00:04:29.492 --rc geninfo_all_blocks=1 00:04:29.492 --rc geninfo_unexecuted_blocks=1 00:04:29.492 00:04:29.492 ' 00:04:29.492 18:53:56 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:29.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.492 --rc genhtml_branch_coverage=1 00:04:29.492 --rc genhtml_function_coverage=1 00:04:29.492 --rc genhtml_legend=1 00:04:29.492 --rc geninfo_all_blocks=1 00:04:29.492 --rc geninfo_unexecuted_blocks=1 00:04:29.492 00:04:29.492 ' 00:04:29.492 18:53:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:29.492 18:53:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.492 18:53:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.492 18:53:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.492 ************************************ 00:04:29.492 START TEST env_memory 00:04:29.492 ************************************ 00:04:29.492 18:53:56 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:29.492 00:04:29.492 00:04:29.492 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.492 http://cunit.sourceforge.net/ 00:04:29.492 00:04:29.492 00:04:29.492 Suite: memory 00:04:29.750 Test: alloc and free memory map ...[2024-11-26 18:53:56.181824] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:29.750 passed 00:04:29.750 Test: mem map translation 
...[2024-11-26 18:53:56.260111] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:29.750 [2024-11-26 18:53:56.260217] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:29.750 [2024-11-26 18:53:56.260337] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:29.750 [2024-11-26 18:53:56.260375] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:29.750 passed 00:04:29.750 Test: mem map registration ...[2024-11-26 18:53:56.359325] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:29.750 [2024-11-26 18:53:56.359454] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:30.008 passed 00:04:30.008 Test: mem map adjacent registrations ...passed 00:04:30.008 00:04:30.008 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.008 suites 1 1 n/a 0 0 00:04:30.008 tests 4 4 4 0 0 00:04:30.008 asserts 152 152 152 0 n/a 00:04:30.008 00:04:30.008 Elapsed time = 0.385 seconds 00:04:30.008 00:04:30.008 real 0m0.431s 00:04:30.008 user 0m0.389s 00:04:30.008 sys 0m0.031s 00:04:30.008 18:53:56 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.008 18:53:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:30.008 ************************************ 00:04:30.008 END TEST env_memory 00:04:30.008 ************************************ 00:04:30.008 18:53:56 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.008 18:53:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.008 18:53:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.008 18:53:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.008 ************************************ 00:04:30.008 START TEST env_vtophys 00:04:30.008 ************************************ 00:04:30.008 18:53:56 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.008 EAL: lib.eal log level changed from notice to debug 00:04:30.008 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.008 EAL: Detected lcore 1 as core 0 on socket 0 00:04:30.008 EAL: Detected lcore 2 as core 0 on socket 0 00:04:30.008 EAL: Detected lcore 3 as core 0 on socket 0 00:04:30.008 EAL: Detected lcore 4 as core 0 on socket 0 00:04:30.008 EAL: Detected lcore 5 as core 0 on socket 0 00:04:30.008 EAL: Detected lcore 6 as core 0 on socket 0 00:04:30.008 EAL: Detected lcore 7 as core 0 on socket 0 00:04:30.008 EAL: Detected lcore 8 as core 0 on socket 0 00:04:30.008 EAL: Detected lcore 9 as core 0 on socket 0 00:04:30.265 EAL: Maximum logical cores by configuration: 128 00:04:30.265 EAL: Detected CPU lcores: 10 00:04:30.265 EAL: Detected NUMA nodes: 1 00:04:30.265 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:30.265 EAL: Detected shared linkage of DPDK 00:04:30.265 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.265 EAL: Selected IOVA mode 'PA' 00:04:30.265 EAL: Probing VFIO support... 00:04:30.265 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.265 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:30.265 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.265 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.265 EAL: Setting up physically contiguous memory... 
00:04:30.265 EAL: Setting maximum number of open files to 524288 00:04:30.265 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.265 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.265 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.265 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.265 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.265 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.265 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.265 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.265 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.265 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.265 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.265 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.265 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.265 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.265 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.265 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.265 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.265 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.265 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.265 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.265 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.265 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.265 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.265 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.265 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.265 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.265 EAL: Hugepages will be freed exactly as allocated. 
00:04:30.265 EAL: No shared files mode enabled, IPC is disabled 00:04:30.265 EAL: No shared files mode enabled, IPC is disabled 00:04:30.265 EAL: TSC frequency is ~2200000 KHz 00:04:30.265 EAL: Main lcore 0 is ready (tid=7fcf4ecd6a40;cpuset=[0]) 00:04:30.265 EAL: Trying to obtain current memory policy. 00:04:30.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.265 EAL: Restoring previous memory policy: 0 00:04:30.265 EAL: request: mp_malloc_sync 00:04:30.265 EAL: No shared files mode enabled, IPC is disabled 00:04:30.265 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.265 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.265 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.265 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.265 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:30.265 00:04:30.265 00:04:30.265 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.265 http://cunit.sourceforge.net/ 00:04:30.265 00:04:30.265 00:04:30.265 Suite: components_suite 00:04:30.832 Test: vtophys_malloc_test ...passed 00:04:30.833 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:30.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.833 EAL: Restoring previous memory policy: 4 00:04:30.833 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.833 EAL: request: mp_malloc_sync 00:04:30.833 EAL: No shared files mode enabled, IPC is disabled 00:04:30.833 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.833 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.833 EAL: request: mp_malloc_sync 00:04:30.833 EAL: No shared files mode enabled, IPC is disabled 00:04:30.833 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.833 EAL: Trying to obtain current memory policy. 
00:04:30.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.833 EAL: Restoring previous memory policy: 4 00:04:30.833 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.833 EAL: request: mp_malloc_sync 00:04:30.833 EAL: No shared files mode enabled, IPC is disabled 00:04:30.833 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.833 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.833 EAL: request: mp_malloc_sync 00:04:30.833 EAL: No shared files mode enabled, IPC is disabled 00:04:30.833 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.833 EAL: Trying to obtain current memory policy. 00:04:30.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.833 EAL: Restoring previous memory policy: 4 00:04:30.833 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.833 EAL: request: mp_malloc_sync 00:04:30.833 EAL: No shared files mode enabled, IPC is disabled 00:04:30.833 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.833 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.833 EAL: request: mp_malloc_sync 00:04:30.833 EAL: No shared files mode enabled, IPC is disabled 00:04:30.833 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.833 EAL: Trying to obtain current memory policy. 00:04:30.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.833 EAL: Restoring previous memory policy: 4 00:04:30.833 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.833 EAL: request: mp_malloc_sync 00:04:30.833 EAL: No shared files mode enabled, IPC is disabled 00:04:30.833 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.833 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.833 EAL: request: mp_malloc_sync 00:04:30.833 EAL: No shared files mode enabled, IPC is disabled 00:04:30.833 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.833 EAL: Trying to obtain current memory policy. 
00:04:30.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.833 EAL: Restoring previous memory policy: 4 00:04:30.833 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.833 EAL: request: mp_malloc_sync 00:04:30.833 EAL: No shared files mode enabled, IPC is disabled 00:04:30.833 EAL: Heap on socket 0 was expanded by 34MB 00:04:31.092 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.092 EAL: request: mp_malloc_sync 00:04:31.092 EAL: No shared files mode enabled, IPC is disabled 00:04:31.092 EAL: Heap on socket 0 was shrunk by 34MB 00:04:31.092 EAL: Trying to obtain current memory policy. 00:04:31.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.092 EAL: Restoring previous memory policy: 4 00:04:31.092 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.092 EAL: request: mp_malloc_sync 00:04:31.092 EAL: No shared files mode enabled, IPC is disabled 00:04:31.092 EAL: Heap on socket 0 was expanded by 66MB 00:04:31.092 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.092 EAL: request: mp_malloc_sync 00:04:31.092 EAL: No shared files mode enabled, IPC is disabled 00:04:31.092 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.350 EAL: Trying to obtain current memory policy. 00:04:31.350 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.351 EAL: Restoring previous memory policy: 4 00:04:31.351 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.351 EAL: request: mp_malloc_sync 00:04:31.351 EAL: No shared files mode enabled, IPC is disabled 00:04:31.351 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.609 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.609 EAL: request: mp_malloc_sync 00:04:31.609 EAL: No shared files mode enabled, IPC is disabled 00:04:31.609 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.609 EAL: Trying to obtain current memory policy. 
00:04:31.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.868 EAL: Restoring previous memory policy: 4 00:04:31.868 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.868 EAL: request: mp_malloc_sync 00:04:31.868 EAL: No shared files mode enabled, IPC is disabled 00:04:31.868 EAL: Heap on socket 0 was expanded by 258MB 00:04:32.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.383 EAL: request: mp_malloc_sync 00:04:32.383 EAL: No shared files mode enabled, IPC is disabled 00:04:32.383 EAL: Heap on socket 0 was shrunk by 258MB 00:04:32.641 EAL: Trying to obtain current memory policy. 00:04:32.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.897 EAL: Restoring previous memory policy: 4 00:04:32.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.897 EAL: request: mp_malloc_sync 00:04:32.897 EAL: No shared files mode enabled, IPC is disabled 00:04:32.897 EAL: Heap on socket 0 was expanded by 514MB 00:04:33.864 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.864 EAL: request: mp_malloc_sync 00:04:33.864 EAL: No shared files mode enabled, IPC is disabled 00:04:33.864 EAL: Heap on socket 0 was shrunk by 514MB 00:04:34.801 EAL: Trying to obtain current memory policy. 
00:04:34.801 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:35.059 EAL: Restoring previous memory policy: 4
00:04:35.059 EAL: Calling mem event callback 'spdk:(nil)'
00:04:35.059 EAL: request: mp_malloc_sync
00:04:35.059 EAL: No shared files mode enabled, IPC is disabled
00:04:35.059 EAL: Heap on socket 0 was expanded by 1026MB
00:04:36.957 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.216 EAL: request: mp_malloc_sync
00:04:37.216 EAL: No shared files mode enabled, IPC is disabled
00:04:37.216 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:38.591 passed
00:04:38.591
00:04:38.591 Run Summary: Type Total Ran Passed Failed Inactive
00:04:38.591 suites 1 1 n/a 0 0
00:04:38.591 tests 2 2 2 0 0
00:04:38.591 asserts 5677 5677 5677 0 n/a
00:04:38.591
00:04:38.591 Elapsed time = 8.184 seconds
00:04:38.591 EAL: Calling mem event callback 'spdk:(nil)'
00:04:38.591 EAL: request: mp_malloc_sync
00:04:38.591 EAL: No shared files mode enabled, IPC is disabled
00:04:38.591 EAL: Heap on socket 0 was shrunk by 2MB
00:04:38.591 EAL: No shared files mode enabled, IPC is disabled
00:04:38.591 EAL: No shared files mode enabled, IPC is disabled
00:04:38.591 EAL: No shared files mode enabled, IPC is disabled
00:04:38.591
00:04:38.591 real 0m8.552s
00:04:38.591 user 0m7.150s
00:04:38.591 sys 0m1.232s
00:04:38.591 18:54:05 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.591 18:54:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:38.591 ************************************
00:04:38.591 END TEST env_vtophys
00:04:38.591 ************************************
00:04:38.591 18:54:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:38.591 18:54:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:38.591 18:54:05 env ************************************
00:04:38.591 START TEST env_pci ************************************
00:04:38.591 18:54:05 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:38.591
00:04:38.591
00:04:38.591 CUnit - A unit testing framework for C - Version 2.1-3
00:04:38.591 http://cunit.sourceforge.net/
00:04:38.591
00:04:38.591
00:04:38.591 Suite: pci
00:04:38.591 Test: pci_hook ...[2024-11-26 18:54:05.199631] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56914 has claimed it
00:04:38.850 passed
00:04:38.850
00:04:38.850 Run Summary: Type Total Ran Passed Failed Inactive
00:04:38.850 suites 1 1 n/a 0 0
00:04:38.850 tests 1 1 1 0 0
00:04:38.850 asserts 25 25 25 0 n/a
00:04:38.850
00:04:38.850 Elapsed time = 0.008 seconds
00:04:38.850 EAL: Cannot find device (10000:00:01.0)
00:04:38.850 EAL: Failed to attach device on primary process
00:04:38.850
00:04:38.850 real 0m0.088s
00:04:38.850 user 0m0.050s
00:04:38.850 sys 0m0.036s
00:04:38.850 18:54:05 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.850 18:54:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:38.850 ************************************
00:04:38.850 END TEST env_pci
00:04:38.850 ************************************
00:04:38.850 18:54:05 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:38.850 18:54:05 env -- env/env.sh@15 -- # uname
00:04:38.850 18:54:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:38.850 18:54:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:38.850 18:54:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:38.850 18:54:05 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:38.850 18:54:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.850 18:54:05 env -- common/autotest_common.sh@10 -- # set +x
00:04:38.850 ************************************
00:04:38.850 START TEST env_dpdk_post_init ************************************
00:04:38.850 18:54:05 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:38.850 EAL: Detected CPU lcores: 10
00:04:38.850 EAL: Detected NUMA nodes: 1
00:04:38.850 EAL: Detected shared linkage of DPDK
00:04:38.850 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:38.850 EAL: Selected IOVA mode 'PA'
00:04:39.109 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:39.109 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:04:39.109 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:04:39.109 Starting DPDK initialization...
00:04:39.109 Starting SPDK post initialization...
00:04:39.109 SPDK NVMe probe
00:04:39.109 Attaching to 0000:00:10.0
00:04:39.109 Attaching to 0000:00:11.0
00:04:39.109 Attached to 0000:00:10.0
00:04:39.109 Attached to 0000:00:11.0
00:04:39.109 Cleaning up...
00:04:39.109
00:04:39.109 real 0m0.295s
00:04:39.109 user 0m0.097s
00:04:39.109 sys 0m0.100s
00:04:39.109 18:54:05 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:39.109 18:54:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:39.109 ************************************
00:04:39.109 END TEST env_dpdk_post_init
00:04:39.109 ************************************
00:04:39.109 18:54:05 env -- env/env.sh@26 -- # uname
00:04:39.109 18:54:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:39.109 18:54:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:39.109 18:54:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:39.109 18:54:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:39.109 18:54:05 env -- common/autotest_common.sh@10 -- # set +x
00:04:39.109 ************************************
00:04:39.109 START TEST env_mem_callbacks ************************************
00:04:39.109 18:54:05 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:39.109 EAL: Detected CPU lcores: 10
00:04:39.109 EAL: Detected NUMA nodes: 1
00:04:39.109 EAL: Detected shared linkage of DPDK
00:04:39.384 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:39.384 EAL: Selected IOVA mode 'PA'
00:04:39.384 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:39.384
00:04:39.384
00:04:39.384 CUnit - A unit testing framework for C - Version 2.1-3
00:04:39.384 http://cunit.sourceforge.net/
00:04:39.384
00:04:39.384
00:04:39.384 Suite: memory
00:04:39.384 Test: test ...
00:04:39.384 register 0x200000200000 2097152
00:04:39.384 malloc 3145728
00:04:39.384 register 0x200000400000 4194304
00:04:39.384 buf 0x2000004fffc0 len 3145728 PASSED
00:04:39.384 malloc 64
00:04:39.384 buf 0x2000004ffec0 len 64 PASSED
00:04:39.384 malloc 4194304
00:04:39.384 register 0x200000800000 6291456
00:04:39.384 buf 0x2000009fffc0 len 4194304 PASSED
00:04:39.384 free 0x2000004fffc0 3145728
00:04:39.384 free 0x2000004ffec0 64
00:04:39.384 unregister 0x200000400000 4194304 PASSED
00:04:39.384 free 0x2000009fffc0 4194304
00:04:39.384 unregister 0x200000800000 6291456 PASSED
00:04:39.384 malloc 8388608
00:04:39.384 register 0x200000400000 10485760
00:04:39.384 buf 0x2000005fffc0 len 8388608 PASSED
00:04:39.384 free 0x2000005fffc0 8388608
00:04:39.384 unregister 0x200000400000 10485760 PASSED
00:04:39.384 passed
00:04:39.384
00:04:39.384 Run Summary: Type Total Ran Passed Failed Inactive
00:04:39.384 suites 1 1 n/a 0 0
00:04:39.384 tests 1 1 1 0 0
00:04:39.384 asserts 15 15 15 0 n/a
00:04:39.384
00:04:39.384 Elapsed time = 0.064 seconds
00:04:39.384
00:04:39.384 real 0m0.277s
00:04:39.384 user 0m0.100s
00:04:39.384 sys 0m0.074s
00:04:39.384 18:54:05 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:39.384 18:54:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:39.384 ************************************
00:04:39.384 END TEST env_mem_callbacks ************************************
00:04:39.384
00:04:39.384 real 0m10.116s
00:04:39.384 user 0m8.011s
00:04:39.384 sys 0m1.714s
00:04:39.384 18:54:05 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:39.384 ************************************
00:04:39.384 END TEST env
00:04:39.384 ************************************
00:04:39.384 18:54:05 env -- common/autotest_common.sh@10 -- # set +x
00:04:39.685 18:54:06 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:39.685 18:54:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:39.685 18:54:06 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:39.685 18:54:06 -- common/autotest_common.sh@10 -- # set +x
00:04:39.685 ************************************
00:04:39.685 START TEST rpc
00:04:39.685 ************************************
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:39.685 * Looking for test storage...
00:04:39.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:39.685 18:54:06 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:39.685 18:54:06 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:39.685 18:54:06 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:39.685 18:54:06 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:39.685 18:54:06 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:39.685 18:54:06 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:39.685 18:54:06 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:39.685 18:54:06 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:39.685 18:54:06 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:39.685 18:54:06 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:39.685 18:54:06 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:39.685 18:54:06 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:39.685 18:54:06 rpc -- scripts/common.sh@345 -- # : 1
00:04:39.685 18:54:06 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:39.685 18:54:06 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:39.685 18:54:06 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:39.685 18:54:06 rpc -- scripts/common.sh@353 -- # local d=1
00:04:39.685 18:54:06 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:39.685 18:54:06 rpc -- scripts/common.sh@355 -- # echo 1
00:04:39.685 18:54:06 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:39.685 18:54:06 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:39.685 18:54:06 rpc -- scripts/common.sh@353 -- # local d=2
00:04:39.685 18:54:06 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:39.685 18:54:06 rpc -- scripts/common.sh@355 -- # echo 2
00:04:39.685 18:54:06 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:39.685 18:54:06 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:39.685 18:54:06 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:39.685 18:54:06 rpc -- scripts/common.sh@368 -- # return 0
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:39.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.685 --rc genhtml_branch_coverage=1
00:04:39.685 --rc genhtml_function_coverage=1
00:04:39.685 --rc genhtml_legend=1
00:04:39.685 --rc geninfo_all_blocks=1
00:04:39.685 --rc geninfo_unexecuted_blocks=1
00:04:39.685
00:04:39.685 '
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:39.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.685 --rc genhtml_branch_coverage=1
00:04:39.685 --rc genhtml_function_coverage=1
00:04:39.685 --rc genhtml_legend=1
00:04:39.685 --rc geninfo_all_blocks=1
00:04:39.685 --rc geninfo_unexecuted_blocks=1
00:04:39.685
00:04:39.685 '
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:39.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.685 --rc genhtml_branch_coverage=1
00:04:39.685 --rc genhtml_function_coverage=1
00:04:39.685 --rc genhtml_legend=1
00:04:39.685 --rc geninfo_all_blocks=1
00:04:39.685 --rc geninfo_unexecuted_blocks=1
00:04:39.685
00:04:39.685 '
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:39.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:39.685 --rc genhtml_branch_coverage=1
00:04:39.685 --rc genhtml_function_coverage=1
00:04:39.685 --rc genhtml_legend=1
00:04:39.685 --rc geninfo_all_blocks=1
00:04:39.685 --rc geninfo_unexecuted_blocks=1
00:04:39.685
00:04:39.685 '
00:04:39.685 18:54:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57042
00:04:39.685 18:54:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:39.685 18:54:06 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:04:39.685 18:54:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57042
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@835 -- # '[' -z 57042 ']'
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:39.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:39.685 18:54:06 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:39.947 [2024-11-26 18:54:06.345959] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization...
00:04:39.947 [2024-11-26 18:54:06.346148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57042 ]
00:04:39.947 [2024-11-26 18:54:06.540760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:40.206 [2024-11-26 18:54:06.715195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:40.206 [2024-11-26 18:54:06.715347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57042' to capture a snapshot of events at runtime.
00:04:40.206 [2024-11-26 18:54:06.715383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:40.206 [2024-11-26 18:54:06.715415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:40.206 [2024-11-26 18:54:06.715437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57042 for offline analysis/debug.
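The `cmp_versions 1.15 '<' 2` trace above (from scripts/common.sh) splits both version strings on `.`, `-`, and `:` and compares them numerically field by field. A minimal standalone Python rendering of that comparison is sketched below; padding the shorter version with zeros is a simplification for this sketch, not necessarily what the shell script does:

```python
import re

def cmp_versions(v1, op, v2):
    """Field-by-field dotted-version comparison, mirroring the cmp_versions
    shell trace above. Splits on . - : and compares numerically; zero-padding
    of the shorter version is an assumption made for this sketch."""
    p1 = [int(x) for x in re.split(r"[.:-]", v1)]
    p2 = [int(x) for x in re.split(r"[.:-]", v2)]
    n = max(len(p1), len(p2))
    p1 += [0] * (n - len(p1))
    p2 += [0] * (n - len(p2))
    for a, b in zip(p1, p2):
        if a > b:
            return op in (">", ">=")
        if a < b:
            return op in ("<", "<=")
    return op in ("<=", ">=", "==")

print(cmp_versions("1.15", "<", "2"))  # True: the first field 1 < 2 decides
```

Here `lt 1.15 2` returns success because the comparison is numeric per field, not lexicographic on the whole string (lexicographically "1.15" vs "2" would still order the same way, but "1.9" vs "1.15" would not).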
00:04:40.206 [2024-11-26 18:54:06.717225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:41.143 18:54:07 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:41.143 18:54:07 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:41.143 18:54:07 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:41.143 18:54:07 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:41.143 18:54:07 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:41.143 18:54:07 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:41.143 18:54:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:41.143 18:54:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:41.143 18:54:07 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:41.143 ************************************
00:04:41.143 START TEST rpc_integrity
00:04:41.143 ************************************
00:04:41.143 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:41.143 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:41.143 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.143 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:41.143 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.143 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:41.143 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:41.403 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:41.403 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:41.403 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.403 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:41.403 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.403 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:41.403 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:41.403 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.403 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:41.403 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.403 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:41.403 {
00:04:41.403 "name": "Malloc0",
00:04:41.403 "aliases": [
00:04:41.403 "27cfac2e-11af-45d0-bb37-8d777dad1143"
00:04:41.403 ],
00:04:41.403 "product_name": "Malloc disk",
00:04:41.403 "block_size": 512,
00:04:41.403 "num_blocks": 16384,
00:04:41.403 "uuid": "27cfac2e-11af-45d0-bb37-8d777dad1143",
00:04:41.403 "assigned_rate_limits": {
00:04:41.403 "rw_ios_per_sec": 0,
00:04:41.403 "rw_mbytes_per_sec": 0,
00:04:41.403 "r_mbytes_per_sec": 0,
00:04:41.403 "w_mbytes_per_sec": 0
00:04:41.403 },
00:04:41.403 "claimed": false,
00:04:41.403 "zoned": false,
00:04:41.403 "supported_io_types": {
00:04:41.403 "read": true,
00:04:41.403 "write": true,
00:04:41.403 "unmap": true,
00:04:41.403 "flush": true,
00:04:41.403 "reset": true,
00:04:41.403 "nvme_admin": false,
00:04:41.403 "nvme_io": false,
00:04:41.403 "nvme_io_md": false,
00:04:41.403 "write_zeroes": true,
00:04:41.403 "zcopy": true,
00:04:41.403 "get_zone_info": false,
00:04:41.403 "zone_management": false,
00:04:41.403 "zone_append": false,
00:04:41.403 "compare": false,
00:04:41.403 "compare_and_write": false,
00:04:41.403 "abort": true,
00:04:41.403 "seek_hole": false,
00:04:41.403 "seek_data": false,
00:04:41.403 "copy": true,
00:04:41.403 "nvme_iov_md": false
00:04:41.403 },
00:04:41.403 "memory_domains": [
00:04:41.403 {
00:04:41.403 "dma_device_id": "system",
00:04:41.403 "dma_device_type": 1
00:04:41.403 },
00:04:41.403 {
00:04:41.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:41.403 "dma_device_type": 2
00:04:41.403 }
00:04:41.403 ],
00:04:41.403 "driver_specific": {}
00:04:41.403 }
00:04:41.403 ]'
00:04:41.403 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:41.403 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:41.403 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:41.403 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.403 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:41.403 [2024-11-26 18:54:07.916086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:41.403 [2024-11-26 18:54:07.916178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:41.403 [2024-11-26 18:54:07.916246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:04:41.404 [2024-11-26 18:54:07.916274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:41.404 [2024-11-26 18:54:07.920402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:41.404 [2024-11-26 18:54:07.920463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:41.404 Passthru0
00:04:41.404 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.404 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:41.404 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.404 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:41.404 18:54:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.404 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:41.404 {
00:04:41.404 "name": "Malloc0",
00:04:41.404 "aliases": [
00:04:41.404 "27cfac2e-11af-45d0-bb37-8d777dad1143"
00:04:41.404 ],
00:04:41.404 "product_name": "Malloc disk",
00:04:41.404 "block_size": 512,
00:04:41.404 "num_blocks": 16384,
00:04:41.404 "uuid": "27cfac2e-11af-45d0-bb37-8d777dad1143",
00:04:41.404 "assigned_rate_limits": {
00:04:41.404 "rw_ios_per_sec": 0,
00:04:41.404 "rw_mbytes_per_sec": 0,
00:04:41.404 "r_mbytes_per_sec": 0,
00:04:41.404 "w_mbytes_per_sec": 0
00:04:41.404 },
00:04:41.404 "claimed": true,
00:04:41.404 "claim_type": "exclusive_write",
00:04:41.404 "zoned": false,
00:04:41.404 "supported_io_types": {
00:04:41.404 "read": true,
00:04:41.404 "write": true,
00:04:41.404 "unmap": true,
00:04:41.404 "flush": true,
00:04:41.404 "reset": true,
00:04:41.404 "nvme_admin": false,
00:04:41.404 "nvme_io": false,
00:04:41.404 "nvme_io_md": false,
00:04:41.404 "write_zeroes": true,
00:04:41.404 "zcopy": true,
00:04:41.404 "get_zone_info": false,
00:04:41.404 "zone_management": false,
00:04:41.404 "zone_append": false,
00:04:41.404 "compare": false,
00:04:41.404 "compare_and_write": false,
00:04:41.404 "abort": true,
00:04:41.404 "seek_hole": false,
00:04:41.404 "seek_data": false,
00:04:41.404 "copy": true,
00:04:41.404 "nvme_iov_md": false
00:04:41.404 },
00:04:41.404 "memory_domains": [
00:04:41.404 {
00:04:41.404 "dma_device_id": "system",
00:04:41.404 "dma_device_type": 1
00:04:41.404 },
00:04:41.404 {
00:04:41.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:41.404 "dma_device_type": 2
00:04:41.404 }
00:04:41.404 ],
00:04:41.404 "driver_specific": {}
00:04:41.404 },
00:04:41.404 {
00:04:41.404 "name": "Passthru0",
00:04:41.404 "aliases": [
00:04:41.404 "529d83a9-014c-5c11-b5a9-dbe128be1e66"
00:04:41.404 ],
00:04:41.404 "product_name": "passthru",
00:04:41.404 "block_size": 512,
00:04:41.404 "num_blocks": 16384,
00:04:41.404 "uuid": "529d83a9-014c-5c11-b5a9-dbe128be1e66",
00:04:41.404 "assigned_rate_limits": {
00:04:41.404 "rw_ios_per_sec": 0,
00:04:41.404 "rw_mbytes_per_sec": 0,
00:04:41.404 "r_mbytes_per_sec": 0,
00:04:41.404 "w_mbytes_per_sec": 0
00:04:41.404 },
00:04:41.404 "claimed": false,
00:04:41.404 "zoned": false,
00:04:41.404 "supported_io_types": {
00:04:41.404 "read": true,
00:04:41.404 "write": true,
00:04:41.404 "unmap": true,
00:04:41.404 "flush": true,
00:04:41.404 "reset": true,
00:04:41.404 "nvme_admin": false,
00:04:41.404 "nvme_io": false,
00:04:41.404 "nvme_io_md": false,
00:04:41.404 "write_zeroes": true,
00:04:41.404 "zcopy": true,
00:04:41.404 "get_zone_info": false,
00:04:41.404 "zone_management": false,
00:04:41.404 "zone_append": false,
00:04:41.404 "compare": false,
00:04:41.404 "compare_and_write": false,
00:04:41.404 "abort": true,
00:04:41.404 "seek_hole": false,
00:04:41.404 "seek_data": false,
00:04:41.404 "copy": true,
00:04:41.404 "nvme_iov_md": false
00:04:41.404 },
00:04:41.404 "memory_domains": [
00:04:41.404 {
00:04:41.404 "dma_device_id": "system",
00:04:41.404 "dma_device_type": 1
00:04:41.404 },
00:04:41.404 {
00:04:41.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:41.404 "dma_device_type": 2
00:04:41.404 }
00:04:41.404 ],
00:04:41.404 "driver_specific": {
00:04:41.404 "passthru": {
00:04:41.404 "name": "Passthru0",
00:04:41.404 "base_bdev_name": "Malloc0"
00:04:41.404 }
00:04:41.404 }
00:04:41.404 }
00:04:41.404 ]'
00:04:41.404 18:54:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:41.404 18:54:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:41.404 18:54:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:41.404 18:54:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.404 18:54:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:41.404 18:54:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.404 18:54:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:41.404 18:54:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.404 18:54:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:41.663 18:54:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.663 18:54:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:41.663 18:54:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.663 18:54:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:41.663 18:54:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.663 18:54:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:41.663 18:54:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:41.663 18:54:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:41.663
00:04:41.663 real 0m0.368s
00:04:41.663 user 0m0.223s
00:04:41.663 sys 0m0.045s
00:04:41.663 18:54:08 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:41.663 18:54:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:41.663 ************************************
00:04:41.663 END TEST rpc_integrity
00:04:41.663 ************************************
00:04:41.663 18:54:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:41.663 18:54:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:41.663 18:54:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:41.663 18:54:08 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:41.663 ************************************
00:04:41.663 START TEST rpc_plugins
00:04:41.663 ************************************
00:04:41.663 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:41.663 18:54:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:41.663 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.663 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:41.663 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.663 18:54:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:41.663 18:54:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:41.663 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.663 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:41.663 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.663 18:54:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:41.663 {
00:04:41.663 "name": "Malloc1",
00:04:41.663 "aliases": [
00:04:41.663 "8a20e4f7-af2c-4f7b-a6ce-a601321a84b6"
00:04:41.663 ],
00:04:41.663 "product_name": "Malloc disk",
00:04:41.663 "block_size": 4096,
00:04:41.663 "num_blocks": 256,
00:04:41.663 "uuid": "8a20e4f7-af2c-4f7b-a6ce-a601321a84b6",
00:04:41.663 "assigned_rate_limits": {
00:04:41.663 "rw_ios_per_sec": 0,
00:04:41.663 "rw_mbytes_per_sec": 0,
00:04:41.663 "r_mbytes_per_sec": 0,
00:04:41.663 "w_mbytes_per_sec": 0
00:04:41.663 },
00:04:41.663 "claimed": false,
00:04:41.663 "zoned": false,
00:04:41.663 "supported_io_types": {
00:04:41.663 "read": true,
00:04:41.663 "write": true,
00:04:41.663 "unmap": true,
00:04:41.663 "flush": true,
00:04:41.663 "reset": true,
00:04:41.663 "nvme_admin": false,
00:04:41.663 "nvme_io": false,
00:04:41.663 "nvme_io_md": false,
00:04:41.663 "write_zeroes": true,
00:04:41.663 "zcopy": true,
00:04:41.663 "get_zone_info": false,
00:04:41.663 "zone_management": false,
00:04:41.663 "zone_append": false,
00:04:41.663 "compare": false,
00:04:41.663 "compare_and_write": false,
00:04:41.663 "abort": true,
00:04:41.663 "seek_hole": false,
00:04:41.663 "seek_data": false,
00:04:41.663 "copy": true,
00:04:41.663 "nvme_iov_md": false
00:04:41.663 },
00:04:41.663 "memory_domains": [
00:04:41.663 {
00:04:41.663 "dma_device_id": "system",
00:04:41.664 "dma_device_type": 1
00:04:41.664 },
00:04:41.664 {
00:04:41.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:41.664 "dma_device_type": 2
00:04:41.664 }
00:04:41.664 ],
00:04:41.664 "driver_specific": {}
00:04:41.664 }
00:04:41.664 ]'
00:04:41.664 18:54:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:41.664 18:54:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:41.664 18:54:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:41.664 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.664 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:41.664 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.664 18:54:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:41.664 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.664 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:41.664 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.664 18:54:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:41.664 18:54:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:41.923 18:54:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:41.923
00:04:41.923 real 0m0.161s
00:04:41.923 user 0m0.105s
00:04:41.923 sys 0m0.017s
00:04:41.923 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:41.923 ************************************
00:04:41.923 END TEST rpc_plugins
00:04:41.923 18:54:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:41.923 ************************************
00:04:41.923 18:54:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:41.923 18:54:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:41.923 18:54:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:41.923 18:54:08 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:41.923 ************************************
00:04:41.923 START TEST rpc_trace_cmd_test
00:04:41.923 ************************************
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:41.923 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57042",
00:04:41.923 "tpoint_group_mask": "0x8",
00:04:41.923 "iscsi_conn": {
00:04:41.923 "mask": "0x2",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "scsi": {
00:04:41.923 "mask": "0x4",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "bdev": {
00:04:41.923 "mask": "0x8",
00:04:41.923 "tpoint_mask": "0xffffffffffffffff"
00:04:41.923 },
00:04:41.923 "nvmf_rdma": {
00:04:41.923 "mask": "0x10",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "nvmf_tcp": {
00:04:41.923 "mask": "0x20",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "ftl": {
00:04:41.923 "mask": "0x40",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "blobfs": {
00:04:41.923 "mask": "0x80",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "dsa": {
00:04:41.923 "mask": "0x200",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "thread": {
00:04:41.923 "mask": "0x400",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "nvme_pcie": {
00:04:41.923 "mask": "0x800",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "iaa": {
00:04:41.923 "mask": "0x1000",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "nvme_tcp": {
00:04:41.923 "mask": "0x2000",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "bdev_nvme": {
00:04:41.923 "mask": "0x4000",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "sock": {
00:04:41.923 "mask": "0x8000",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "blob": {
00:04:41.923 "mask": "0x10000",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "bdev_raid": {
00:04:41.923 "mask": "0x20000",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 },
00:04:41.923 "scheduler": {
00:04:41.923 "mask": "0x40000",
00:04:41.923 "tpoint_mask": "0x0"
00:04:41.923 }
00:04:41.923 }'
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:41.923 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:42.182 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:42.182 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:42.182 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:42.182 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:42.182 18:54:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:42.182
00:04:42.182 real 0m0.279s
00:04:42.182 user 0m0.248s
00:04:42.182 sys 0m0.023s
00:04:42.182 18:54:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:42.182 ************************************ 00:04:42.182 END TEST rpc_trace_cmd_test 00:04:42.182 ************************************ 00:04:42.182 18:54:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:42.182 18:54:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:42.182 18:54:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:42.182 18:54:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:42.182 18:54:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.182 18:54:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.182 18:54:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.182 ************************************ 00:04:42.182 START TEST rpc_daemon_integrity 00:04:42.182 ************************************ 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.182 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:42.442 { 00:04:42.442 "name": "Malloc2", 00:04:42.442 "aliases": [ 00:04:42.442 "93eff287-fa69-4f99-9d53-6fb5764c0d79" 00:04:42.442 ], 00:04:42.442 "product_name": "Malloc disk", 00:04:42.442 "block_size": 512, 00:04:42.442 "num_blocks": 16384, 00:04:42.442 "uuid": "93eff287-fa69-4f99-9d53-6fb5764c0d79", 00:04:42.442 "assigned_rate_limits": { 00:04:42.442 "rw_ios_per_sec": 0, 00:04:42.442 "rw_mbytes_per_sec": 0, 00:04:42.442 "r_mbytes_per_sec": 0, 00:04:42.442 "w_mbytes_per_sec": 0 00:04:42.442 }, 00:04:42.442 "claimed": false, 00:04:42.442 "zoned": false, 00:04:42.442 "supported_io_types": { 00:04:42.442 "read": true, 00:04:42.442 "write": true, 00:04:42.442 "unmap": true, 00:04:42.442 "flush": true, 00:04:42.442 "reset": true, 00:04:42.442 "nvme_admin": false, 00:04:42.442 "nvme_io": false, 00:04:42.442 "nvme_io_md": false, 00:04:42.442 "write_zeroes": true, 00:04:42.442 "zcopy": true, 00:04:42.442 "get_zone_info": false, 00:04:42.442 "zone_management": false, 00:04:42.442 "zone_append": false, 00:04:42.442 "compare": false, 00:04:42.442 "compare_and_write": false, 00:04:42.442 "abort": true, 00:04:42.442 "seek_hole": false, 00:04:42.442 "seek_data": false, 00:04:42.442 "copy": true, 00:04:42.442 "nvme_iov_md": false 00:04:42.442 }, 00:04:42.442 "memory_domains": [ 00:04:42.442 { 00:04:42.442 "dma_device_id": "system", 00:04:42.442 "dma_device_type": 1 00:04:42.442 }, 00:04:42.442 { 00:04:42.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.442 "dma_device_type": 2 00:04:42.442 } 
00:04:42.442 ], 00:04:42.442 "driver_specific": {} 00:04:42.442 } 00:04:42.442 ]' 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.442 [2024-11-26 18:54:08.866080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:42.442 [2024-11-26 18:54:08.866192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:42.442 [2024-11-26 18:54:08.866236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:42.442 [2024-11-26 18:54:08.866259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:42.442 [2024-11-26 18:54:08.869420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:42.442 [2024-11-26 18:54:08.869482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:42.442 Passthru0 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:42.442 { 00:04:42.442 "name": "Malloc2", 00:04:42.442 "aliases": [ 00:04:42.442 "93eff287-fa69-4f99-9d53-6fb5764c0d79" 
00:04:42.442 ], 00:04:42.442 "product_name": "Malloc disk", 00:04:42.442 "block_size": 512, 00:04:42.442 "num_blocks": 16384, 00:04:42.442 "uuid": "93eff287-fa69-4f99-9d53-6fb5764c0d79", 00:04:42.442 "assigned_rate_limits": { 00:04:42.442 "rw_ios_per_sec": 0, 00:04:42.442 "rw_mbytes_per_sec": 0, 00:04:42.442 "r_mbytes_per_sec": 0, 00:04:42.442 "w_mbytes_per_sec": 0 00:04:42.442 }, 00:04:42.442 "claimed": true, 00:04:42.442 "claim_type": "exclusive_write", 00:04:42.442 "zoned": false, 00:04:42.442 "supported_io_types": { 00:04:42.442 "read": true, 00:04:42.442 "write": true, 00:04:42.442 "unmap": true, 00:04:42.442 "flush": true, 00:04:42.442 "reset": true, 00:04:42.442 "nvme_admin": false, 00:04:42.442 "nvme_io": false, 00:04:42.442 "nvme_io_md": false, 00:04:42.442 "write_zeroes": true, 00:04:42.442 "zcopy": true, 00:04:42.442 "get_zone_info": false, 00:04:42.442 "zone_management": false, 00:04:42.442 "zone_append": false, 00:04:42.442 "compare": false, 00:04:42.442 "compare_and_write": false, 00:04:42.442 "abort": true, 00:04:42.442 "seek_hole": false, 00:04:42.442 "seek_data": false, 00:04:42.442 "copy": true, 00:04:42.442 "nvme_iov_md": false 00:04:42.442 }, 00:04:42.442 "memory_domains": [ 00:04:42.442 { 00:04:42.442 "dma_device_id": "system", 00:04:42.442 "dma_device_type": 1 00:04:42.442 }, 00:04:42.442 { 00:04:42.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.442 "dma_device_type": 2 00:04:42.442 } 00:04:42.442 ], 00:04:42.442 "driver_specific": {} 00:04:42.442 }, 00:04:42.442 { 00:04:42.442 "name": "Passthru0", 00:04:42.442 "aliases": [ 00:04:42.442 "c5ed4f7a-1e07-5e99-80e2-3f1c5e8034b3" 00:04:42.442 ], 00:04:42.442 "product_name": "passthru", 00:04:42.442 "block_size": 512, 00:04:42.442 "num_blocks": 16384, 00:04:42.442 "uuid": "c5ed4f7a-1e07-5e99-80e2-3f1c5e8034b3", 00:04:42.442 "assigned_rate_limits": { 00:04:42.442 "rw_ios_per_sec": 0, 00:04:42.442 "rw_mbytes_per_sec": 0, 00:04:42.442 "r_mbytes_per_sec": 0, 00:04:42.442 "w_mbytes_per_sec": 0 
00:04:42.442 }, 00:04:42.442 "claimed": false, 00:04:42.442 "zoned": false, 00:04:42.442 "supported_io_types": { 00:04:42.442 "read": true, 00:04:42.442 "write": true, 00:04:42.442 "unmap": true, 00:04:42.442 "flush": true, 00:04:42.442 "reset": true, 00:04:42.442 "nvme_admin": false, 00:04:42.442 "nvme_io": false, 00:04:42.442 "nvme_io_md": false, 00:04:42.442 "write_zeroes": true, 00:04:42.442 "zcopy": true, 00:04:42.442 "get_zone_info": false, 00:04:42.442 "zone_management": false, 00:04:42.442 "zone_append": false, 00:04:42.442 "compare": false, 00:04:42.442 "compare_and_write": false, 00:04:42.442 "abort": true, 00:04:42.442 "seek_hole": false, 00:04:42.442 "seek_data": false, 00:04:42.442 "copy": true, 00:04:42.442 "nvme_iov_md": false 00:04:42.442 }, 00:04:42.442 "memory_domains": [ 00:04:42.442 { 00:04:42.442 "dma_device_id": "system", 00:04:42.442 "dma_device_type": 1 00:04:42.442 }, 00:04:42.442 { 00:04:42.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.442 "dma_device_type": 2 00:04:42.442 } 00:04:42.442 ], 00:04:42.442 "driver_specific": { 00:04:42.442 "passthru": { 00:04:42.442 "name": "Passthru0", 00:04:42.442 "base_bdev_name": "Malloc2" 00:04:42.442 } 00:04:42.442 } 00:04:42.442 } 00:04:42.442 ]' 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.442 18:54:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.442 18:54:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.442 18:54:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:42.442 18:54:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:42.442 18:54:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:42.442 00:04:42.442 real 0m0.345s 00:04:42.442 user 0m0.200s 00:04:42.442 sys 0m0.052s 00:04:42.442 18:54:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.442 ************************************ 00:04:42.442 END TEST rpc_daemon_integrity 00:04:42.442 18:54:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.442 ************************************ 00:04:42.701 18:54:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:42.701 18:54:09 rpc -- rpc/rpc.sh@84 -- # killprocess 57042 00:04:42.701 18:54:09 rpc -- common/autotest_common.sh@954 -- # '[' -z 57042 ']' 00:04:42.701 18:54:09 rpc -- common/autotest_common.sh@958 -- # kill -0 57042 00:04:42.701 18:54:09 rpc -- common/autotest_common.sh@959 -- # uname 00:04:42.701 18:54:09 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.701 18:54:09 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57042 00:04:42.701 18:54:09 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.701 18:54:09 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.701 
18:54:09 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57042' 00:04:42.701 killing process with pid 57042 00:04:42.701 18:54:09 rpc -- common/autotest_common.sh@973 -- # kill 57042 00:04:42.701 18:54:09 rpc -- common/autotest_common.sh@978 -- # wait 57042 00:04:45.309 00:04:45.309 real 0m5.399s 00:04:45.309 user 0m6.039s 00:04:45.309 sys 0m1.003s 00:04:45.309 18:54:11 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.309 18:54:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.309 ************************************ 00:04:45.309 END TEST rpc 00:04:45.309 ************************************ 00:04:45.309 18:54:11 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:45.309 18:54:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.309 18:54:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.309 18:54:11 -- common/autotest_common.sh@10 -- # set +x 00:04:45.309 ************************************ 00:04:45.309 START TEST skip_rpc 00:04:45.309 ************************************ 00:04:45.309 18:54:11 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:45.309 * Looking for test storage... 
00:04:45.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.309 18:54:11 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.309 18:54:11 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.309 18:54:11 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.309 18:54:11 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.309 18:54:11 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:45.309 18:54:11 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.309 18:54:11 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.309 --rc genhtml_branch_coverage=1 00:04:45.309 --rc genhtml_function_coverage=1 00:04:45.309 --rc genhtml_legend=1 00:04:45.309 --rc geninfo_all_blocks=1 00:04:45.309 --rc geninfo_unexecuted_blocks=1 00:04:45.309 00:04:45.309 ' 00:04:45.309 18:54:11 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.309 --rc genhtml_branch_coverage=1 00:04:45.309 --rc genhtml_function_coverage=1 00:04:45.309 --rc genhtml_legend=1 00:04:45.309 --rc geninfo_all_blocks=1 00:04:45.309 --rc geninfo_unexecuted_blocks=1 00:04:45.309 00:04:45.309 ' 00:04:45.309 18:54:11 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:45.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.309 --rc genhtml_branch_coverage=1 00:04:45.309 --rc genhtml_function_coverage=1 00:04:45.309 --rc genhtml_legend=1 00:04:45.309 --rc geninfo_all_blocks=1 00:04:45.309 --rc geninfo_unexecuted_blocks=1 00:04:45.309 00:04:45.309 ' 00:04:45.309 18:54:11 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.309 --rc genhtml_branch_coverage=1 00:04:45.310 --rc genhtml_function_coverage=1 00:04:45.310 --rc genhtml_legend=1 00:04:45.310 --rc geninfo_all_blocks=1 00:04:45.310 --rc geninfo_unexecuted_blocks=1 00:04:45.310 00:04:45.310 ' 00:04:45.310 18:54:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.310 18:54:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:45.310 18:54:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:45.310 18:54:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.310 18:54:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.310 18:54:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.310 ************************************ 00:04:45.310 START TEST skip_rpc 00:04:45.310 ************************************ 00:04:45.310 18:54:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:45.310 18:54:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57277 00:04:45.310 18:54:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.310 18:54:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:45.310 18:54:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:45.310 [2024-11-26 18:54:11.806432] Starting SPDK v25.01-pre 
git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:04:45.310 [2024-11-26 18:54:11.806632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57277 ] 00:04:45.567 [2024-11-26 18:54:12.005419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.824 [2024-11-26 18:54:12.222367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.090 18:54:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57277 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57277 ']' 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57277 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57277 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.091 killing process with pid 57277 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57277' 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57277 00:04:51.091 18:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57277 00:04:52.990 00:04:52.990 real 0m7.639s 00:04:52.991 user 0m7.012s 00:04:52.991 sys 0m0.508s 00:04:52.991 18:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.991 18:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.991 ************************************ 00:04:52.991 END TEST skip_rpc 00:04:52.991 ************************************ 00:04:52.991 18:54:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:52.991 18:54:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.991 18:54:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.991 18:54:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.991 
************************************ 00:04:52.991 START TEST skip_rpc_with_json 00:04:52.991 ************************************ 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57386 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57386 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57386 ']' 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.991 18:54:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.991 [2024-11-26 18:54:19.489715] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:04:52.991 [2024-11-26 18:54:19.489910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57386 ] 00:04:53.249 [2024-11-26 18:54:19.675426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.249 [2024-11-26 18:54:19.825310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.624 [2024-11-26 18:54:20.817669] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:54.624 request: 00:04:54.624 { 00:04:54.624 "trtype": "tcp", 00:04:54.624 "method": "nvmf_get_transports", 00:04:54.624 "req_id": 1 00:04:54.624 } 00:04:54.624 Got JSON-RPC error response 00:04:54.624 response: 00:04:54.624 { 00:04:54.624 "code": -19, 00:04:54.624 "message": "No such device" 00:04:54.624 } 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.624 [2024-11-26 18:54:20.829886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.624 18:54:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.624 18:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.624 18:54:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.624 { 00:04:54.624 "subsystems": [ 00:04:54.624 { 00:04:54.624 "subsystem": "fsdev", 00:04:54.624 "config": [ 00:04:54.624 { 00:04:54.624 "method": "fsdev_set_opts", 00:04:54.624 "params": { 00:04:54.624 "fsdev_io_pool_size": 65535, 00:04:54.624 "fsdev_io_cache_size": 256 00:04:54.624 } 00:04:54.624 } 00:04:54.624 ] 00:04:54.624 }, 00:04:54.624 { 00:04:54.624 "subsystem": "keyring", 00:04:54.624 "config": [] 00:04:54.624 }, 00:04:54.624 { 00:04:54.624 "subsystem": "iobuf", 00:04:54.624 "config": [ 00:04:54.624 { 00:04:54.624 "method": "iobuf_set_options", 00:04:54.624 "params": { 00:04:54.624 "small_pool_count": 8192, 00:04:54.624 "large_pool_count": 1024, 00:04:54.624 "small_bufsize": 8192, 00:04:54.624 "large_bufsize": 135168, 00:04:54.624 "enable_numa": false 00:04:54.624 } 00:04:54.624 } 00:04:54.624 ] 00:04:54.624 }, 00:04:54.624 { 00:04:54.624 "subsystem": "sock", 00:04:54.624 "config": [ 00:04:54.624 { 00:04:54.624 "method": "sock_set_default_impl", 00:04:54.624 "params": { 00:04:54.624 "impl_name": "posix" 00:04:54.624 } 00:04:54.624 }, 00:04:54.624 { 00:04:54.624 "method": "sock_impl_set_options", 00:04:54.624 "params": { 00:04:54.624 "impl_name": "ssl", 00:04:54.624 "recv_buf_size": 4096, 00:04:54.624 "send_buf_size": 4096, 00:04:54.624 "enable_recv_pipe": true, 00:04:54.624 "enable_quickack": false, 00:04:54.624 
"enable_placement_id": 0, 00:04:54.624 "enable_zerocopy_send_server": true, 00:04:54.624 "enable_zerocopy_send_client": false, 00:04:54.624 "zerocopy_threshold": 0, 00:04:54.624 "tls_version": 0, 00:04:54.624 "enable_ktls": false 00:04:54.624 } 00:04:54.624 }, 00:04:54.624 { 00:04:54.624 "method": "sock_impl_set_options", 00:04:54.624 "params": { 00:04:54.624 "impl_name": "posix", 00:04:54.624 "recv_buf_size": 2097152, 00:04:54.624 "send_buf_size": 2097152, 00:04:54.624 "enable_recv_pipe": true, 00:04:54.624 "enable_quickack": false, 00:04:54.624 "enable_placement_id": 0, 00:04:54.624 "enable_zerocopy_send_server": true, 00:04:54.624 "enable_zerocopy_send_client": false, 00:04:54.624 "zerocopy_threshold": 0, 00:04:54.624 "tls_version": 0, 00:04:54.624 "enable_ktls": false 00:04:54.624 } 00:04:54.624 } 00:04:54.624 ] 00:04:54.624 }, 00:04:54.624 { 00:04:54.624 "subsystem": "vmd", 00:04:54.624 "config": [] 00:04:54.624 }, 00:04:54.624 { 00:04:54.624 "subsystem": "accel", 00:04:54.624 "config": [ 00:04:54.624 { 00:04:54.624 "method": "accel_set_options", 00:04:54.624 "params": { 00:04:54.624 "small_cache_size": 128, 00:04:54.624 "large_cache_size": 16, 00:04:54.624 "task_count": 2048, 00:04:54.624 "sequence_count": 2048, 00:04:54.624 "buf_count": 2048 00:04:54.624 } 00:04:54.624 } 00:04:54.624 ] 00:04:54.624 }, 00:04:54.624 { 00:04:54.624 "subsystem": "bdev", 00:04:54.624 "config": [ 00:04:54.624 { 00:04:54.624 "method": "bdev_set_options", 00:04:54.624 "params": { 00:04:54.624 "bdev_io_pool_size": 65535, 00:04:54.624 "bdev_io_cache_size": 256, 00:04:54.624 "bdev_auto_examine": true, 00:04:54.624 "iobuf_small_cache_size": 128, 00:04:54.624 "iobuf_large_cache_size": 16 00:04:54.625 } 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "method": "bdev_raid_set_options", 00:04:54.625 "params": { 00:04:54.625 "process_window_size_kb": 1024, 00:04:54.625 "process_max_bandwidth_mb_sec": 0 00:04:54.625 } 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "method": "bdev_iscsi_set_options", 
00:04:54.625 "params": { 00:04:54.625 "timeout_sec": 30 00:04:54.625 } 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "method": "bdev_nvme_set_options", 00:04:54.625 "params": { 00:04:54.625 "action_on_timeout": "none", 00:04:54.625 "timeout_us": 0, 00:04:54.625 "timeout_admin_us": 0, 00:04:54.625 "keep_alive_timeout_ms": 10000, 00:04:54.625 "arbitration_burst": 0, 00:04:54.625 "low_priority_weight": 0, 00:04:54.625 "medium_priority_weight": 0, 00:04:54.625 "high_priority_weight": 0, 00:04:54.625 "nvme_adminq_poll_period_us": 10000, 00:04:54.625 "nvme_ioq_poll_period_us": 0, 00:04:54.625 "io_queue_requests": 0, 00:04:54.625 "delay_cmd_submit": true, 00:04:54.625 "transport_retry_count": 4, 00:04:54.625 "bdev_retry_count": 3, 00:04:54.625 "transport_ack_timeout": 0, 00:04:54.625 "ctrlr_loss_timeout_sec": 0, 00:04:54.625 "reconnect_delay_sec": 0, 00:04:54.625 "fast_io_fail_timeout_sec": 0, 00:04:54.625 "disable_auto_failback": false, 00:04:54.625 "generate_uuids": false, 00:04:54.625 "transport_tos": 0, 00:04:54.625 "nvme_error_stat": false, 00:04:54.625 "rdma_srq_size": 0, 00:04:54.625 "io_path_stat": false, 00:04:54.625 "allow_accel_sequence": false, 00:04:54.625 "rdma_max_cq_size": 0, 00:04:54.625 "rdma_cm_event_timeout_ms": 0, 00:04:54.625 "dhchap_digests": [ 00:04:54.625 "sha256", 00:04:54.625 "sha384", 00:04:54.625 "sha512" 00:04:54.625 ], 00:04:54.625 "dhchap_dhgroups": [ 00:04:54.625 "null", 00:04:54.625 "ffdhe2048", 00:04:54.625 "ffdhe3072", 00:04:54.625 "ffdhe4096", 00:04:54.625 "ffdhe6144", 00:04:54.625 "ffdhe8192" 00:04:54.625 ] 00:04:54.625 } 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "method": "bdev_nvme_set_hotplug", 00:04:54.625 "params": { 00:04:54.625 "period_us": 100000, 00:04:54.625 "enable": false 00:04:54.625 } 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "method": "bdev_wait_for_examine" 00:04:54.625 } 00:04:54.625 ] 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "subsystem": "scsi", 00:04:54.625 "config": null 00:04:54.625 }, 00:04:54.625 { 
00:04:54.625 "subsystem": "scheduler", 00:04:54.625 "config": [ 00:04:54.625 { 00:04:54.625 "method": "framework_set_scheduler", 00:04:54.625 "params": { 00:04:54.625 "name": "static" 00:04:54.625 } 00:04:54.625 } 00:04:54.625 ] 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "subsystem": "vhost_scsi", 00:04:54.625 "config": [] 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "subsystem": "vhost_blk", 00:04:54.625 "config": [] 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "subsystem": "ublk", 00:04:54.625 "config": [] 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "subsystem": "nbd", 00:04:54.625 "config": [] 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "subsystem": "nvmf", 00:04:54.625 "config": [ 00:04:54.625 { 00:04:54.625 "method": "nvmf_set_config", 00:04:54.625 "params": { 00:04:54.625 "discovery_filter": "match_any", 00:04:54.625 "admin_cmd_passthru": { 00:04:54.625 "identify_ctrlr": false 00:04:54.625 }, 00:04:54.625 "dhchap_digests": [ 00:04:54.625 "sha256", 00:04:54.625 "sha384", 00:04:54.625 "sha512" 00:04:54.625 ], 00:04:54.625 "dhchap_dhgroups": [ 00:04:54.625 "null", 00:04:54.625 "ffdhe2048", 00:04:54.625 "ffdhe3072", 00:04:54.625 "ffdhe4096", 00:04:54.625 "ffdhe6144", 00:04:54.625 "ffdhe8192" 00:04:54.625 ] 00:04:54.625 } 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "method": "nvmf_set_max_subsystems", 00:04:54.625 "params": { 00:04:54.625 "max_subsystems": 1024 00:04:54.625 } 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "method": "nvmf_set_crdt", 00:04:54.625 "params": { 00:04:54.625 "crdt1": 0, 00:04:54.625 "crdt2": 0, 00:04:54.625 "crdt3": 0 00:04:54.625 } 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "method": "nvmf_create_transport", 00:04:54.625 "params": { 00:04:54.625 "trtype": "TCP", 00:04:54.625 "max_queue_depth": 128, 00:04:54.625 "max_io_qpairs_per_ctrlr": 127, 00:04:54.625 "in_capsule_data_size": 4096, 00:04:54.625 "max_io_size": 131072, 00:04:54.625 "io_unit_size": 131072, 00:04:54.625 "max_aq_depth": 128, 00:04:54.625 "num_shared_buffers": 511, 
00:04:54.625 "buf_cache_size": 4294967295, 00:04:54.625 "dif_insert_or_strip": false, 00:04:54.625 "zcopy": false, 00:04:54.625 "c2h_success": true, 00:04:54.625 "sock_priority": 0, 00:04:54.625 "abort_timeout_sec": 1, 00:04:54.625 "ack_timeout": 0, 00:04:54.625 "data_wr_pool_size": 0 00:04:54.625 } 00:04:54.625 } 00:04:54.625 ] 00:04:54.625 }, 00:04:54.625 { 00:04:54.625 "subsystem": "iscsi", 00:04:54.625 "config": [ 00:04:54.625 { 00:04:54.625 "method": "iscsi_set_options", 00:04:54.625 "params": { 00:04:54.625 "node_base": "iqn.2016-06.io.spdk", 00:04:54.625 "max_sessions": 128, 00:04:54.625 "max_connections_per_session": 2, 00:04:54.625 "max_queue_depth": 64, 00:04:54.625 "default_time2wait": 2, 00:04:54.625 "default_time2retain": 20, 00:04:54.625 "first_burst_length": 8192, 00:04:54.626 "immediate_data": true, 00:04:54.626 "allow_duplicated_isid": false, 00:04:54.626 "error_recovery_level": 0, 00:04:54.626 "nop_timeout": 60, 00:04:54.626 "nop_in_interval": 30, 00:04:54.626 "disable_chap": false, 00:04:54.626 "require_chap": false, 00:04:54.626 "mutual_chap": false, 00:04:54.626 "chap_group": 0, 00:04:54.626 "max_large_datain_per_connection": 64, 00:04:54.626 "max_r2t_per_connection": 4, 00:04:54.626 "pdu_pool_size": 36864, 00:04:54.626 "immediate_data_pool_size": 16384, 00:04:54.626 "data_out_pool_size": 2048 00:04:54.626 } 00:04:54.626 } 00:04:54.626 ] 00:04:54.626 } 00:04:54.626 ] 00:04:54.626 } 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57386 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57386 ']' 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57386 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57386 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.626 killing process with pid 57386 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57386' 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57386 00:04:54.626 18:54:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57386 00:04:57.182 18:54:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57441 00:04:57.182 18:54:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:57.182 18:54:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.446 18:54:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57441 00:05:02.446 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57441 ']' 00:05:02.446 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57441 00:05:02.446 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:02.446 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.446 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57441 00:05:02.446 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.446 killing process with pid 57441 00:05:02.446 18:54:28 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.446 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57441' 00:05:02.446 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57441 00:05:02.446 18:54:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57441 00:05:04.351 18:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.351 18:54:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.351 00:05:04.351 real 0m11.607s 00:05:04.351 user 0m10.799s 00:05:04.351 sys 0m1.258s 00:05:04.351 18:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.351 18:54:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.351 ************************************ 00:05:04.351 END TEST skip_rpc_with_json 00:05:04.351 ************************************ 00:05:04.609 18:54:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:04.609 18:54:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.609 18:54:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.609 18:54:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.609 ************************************ 00:05:04.609 START TEST skip_rpc_with_delay 00:05:04.609 ************************************ 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:04.609 
18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.609 [2024-11-26 18:54:31.156335] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:04.609 00:05:04.609 real 0m0.210s 00:05:04.609 user 0m0.106s 00:05:04.609 sys 0m0.101s 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.609 ************************************ 00:05:04.609 18:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:04.609 END TEST skip_rpc_with_delay 00:05:04.609 ************************************ 00:05:04.867 18:54:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:04.867 18:54:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:04.867 18:54:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:04.867 18:54:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.867 18:54:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.867 18:54:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.867 ************************************ 00:05:04.867 START TEST exit_on_failed_rpc_init 00:05:04.867 ************************************ 00:05:04.867 18:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:04.867 18:54:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57576 00:05:04.867 18:54:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57576 00:05:04.867 18:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57576 ']' 00:05:04.867 18:54:31 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.867 18:54:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.867 18:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.867 18:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.867 18:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.867 18:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.867 [2024-11-26 18:54:31.428274] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:05:04.867 [2024-11-26 18:54:31.428522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57576 ] 00:05:05.125 [2024-11-26 18:54:31.618774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.384 [2024-11-26 18:54:31.779015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@652 -- # local es=0 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:06.340 18:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.598 [2024-11-26 18:54:33.012393] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:05:06.598 [2024-11-26 18:54:33.012623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57599 ] 00:05:06.598 [2024-11-26 18:54:33.208587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.856 [2024-11-26 18:54:33.365897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.856 [2024-11-26 18:54:33.366054] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:06.856 [2024-11-26 18:54:33.366083] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:06.856 [2024-11-26 18:54:33.366104] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57576 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57576 ']' 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57576 00:05:07.113 18:54:33 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.113 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57576 00:05:07.370 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.370 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.370 killing process with pid 57576 00:05:07.370 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57576' 00:05:07.370 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57576 00:05:07.370 18:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57576 00:05:09.947 00:05:09.947 real 0m4.959s 00:05:09.947 user 0m5.374s 00:05:09.947 sys 0m0.801s 00:05:09.947 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.947 ************************************ 00:05:09.947 END TEST exit_on_failed_rpc_init 00:05:09.947 ************************************ 00:05:09.948 18:54:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.948 18:54:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:09.948 00:05:09.948 real 0m24.825s 00:05:09.948 user 0m23.468s 00:05:09.948 sys 0m2.892s 00:05:09.948 18:54:36 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.948 18:54:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.948 ************************************ 00:05:09.948 END TEST skip_rpc 00:05:09.948 ************************************ 00:05:09.948 18:54:36 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:09.948 18:54:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.948 18:54:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.948 18:54:36 -- common/autotest_common.sh@10 -- # set +x 00:05:09.948 ************************************ 00:05:09.948 START TEST rpc_client 00:05:09.948 ************************************ 00:05:09.948 18:54:36 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:09.948 * Looking for test storage... 00:05:09.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:09.948 18:54:36 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.948 18:54:36 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.948 18:54:36 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.948 18:54:36 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.948 18:54:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:09.948 18:54:36 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.948 18:54:36 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.948 --rc genhtml_branch_coverage=1 00:05:09.948 --rc genhtml_function_coverage=1 00:05:09.948 --rc genhtml_legend=1 00:05:09.948 --rc geninfo_all_blocks=1 00:05:09.948 --rc geninfo_unexecuted_blocks=1 00:05:09.948 00:05:09.948 ' 00:05:09.948 18:54:36 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.948 --rc genhtml_branch_coverage=1 00:05:09.948 --rc genhtml_function_coverage=1 00:05:09.948 --rc 
genhtml_legend=1 00:05:09.948 --rc geninfo_all_blocks=1 00:05:09.948 --rc geninfo_unexecuted_blocks=1 00:05:09.948 00:05:09.948 ' 00:05:09.948 18:54:36 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.948 --rc genhtml_branch_coverage=1 00:05:09.948 --rc genhtml_function_coverage=1 00:05:09.948 --rc genhtml_legend=1 00:05:09.948 --rc geninfo_all_blocks=1 00:05:09.948 --rc geninfo_unexecuted_blocks=1 00:05:09.948 00:05:09.948 ' 00:05:09.948 18:54:36 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.948 --rc genhtml_branch_coverage=1 00:05:09.948 --rc genhtml_function_coverage=1 00:05:09.948 --rc genhtml_legend=1 00:05:09.948 --rc geninfo_all_blocks=1 00:05:09.948 --rc geninfo_unexecuted_blocks=1 00:05:09.948 00:05:09.948 ' 00:05:09.948 18:54:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:10.206 OK 00:05:10.206 18:54:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:10.206 00:05:10.206 real 0m0.273s 00:05:10.206 user 0m0.173s 00:05:10.206 sys 0m0.112s 00:05:10.206 18:54:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.206 18:54:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:10.206 ************************************ 00:05:10.206 END TEST rpc_client 00:05:10.206 ************************************ 00:05:10.206 18:54:36 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:10.206 18:54:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.206 18:54:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.206 18:54:36 -- common/autotest_common.sh@10 -- # set +x 00:05:10.206 ************************************ 00:05:10.206 START TEST json_config 
00:05:10.206 ************************************ 00:05:10.206 18:54:36 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:10.206 18:54:36 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.206 18:54:36 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.206 18:54:36 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.206 18:54:36 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.206 18:54:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.206 18:54:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.206 18:54:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.206 18:54:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.206 18:54:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.206 18:54:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.206 18:54:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.206 18:54:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.206 18:54:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.206 18:54:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.206 18:54:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.206 18:54:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:10.206 18:54:36 json_config -- scripts/common.sh@345 -- # : 1 00:05:10.206 18:54:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.206 18:54:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.206 18:54:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:10.465 18:54:36 json_config -- scripts/common.sh@353 -- # local d=1 00:05:10.465 18:54:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.465 18:54:36 json_config -- scripts/common.sh@355 -- # echo 1 00:05:10.465 18:54:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.465 18:54:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:10.465 18:54:36 json_config -- scripts/common.sh@353 -- # local d=2 00:05:10.465 18:54:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.465 18:54:36 json_config -- scripts/common.sh@355 -- # echo 2 00:05:10.465 18:54:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.465 18:54:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.465 18:54:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.465 18:54:36 json_config -- scripts/common.sh@368 -- # return 0 00:05:10.465 18:54:36 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.465 18:54:36 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.465 --rc genhtml_branch_coverage=1 00:05:10.465 --rc genhtml_function_coverage=1 00:05:10.465 --rc genhtml_legend=1 00:05:10.465 --rc geninfo_all_blocks=1 00:05:10.465 --rc geninfo_unexecuted_blocks=1 00:05:10.465 00:05:10.465 ' 00:05:10.465 18:54:36 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.465 --rc genhtml_branch_coverage=1 00:05:10.465 --rc genhtml_function_coverage=1 00:05:10.465 --rc genhtml_legend=1 00:05:10.465 --rc geninfo_all_blocks=1 00:05:10.465 --rc geninfo_unexecuted_blocks=1 00:05:10.465 00:05:10.465 ' 00:05:10.465 18:54:36 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.465 --rc genhtml_branch_coverage=1 00:05:10.465 --rc genhtml_function_coverage=1 00:05:10.465 --rc genhtml_legend=1 00:05:10.465 --rc geninfo_all_blocks=1 00:05:10.465 --rc geninfo_unexecuted_blocks=1 00:05:10.465 00:05:10.465 ' 00:05:10.465 18:54:36 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.465 --rc genhtml_branch_coverage=1 00:05:10.465 --rc genhtml_function_coverage=1 00:05:10.465 --rc genhtml_legend=1 00:05:10.465 --rc geninfo_all_blocks=1 00:05:10.465 --rc geninfo_unexecuted_blocks=1 00:05:10.465 00:05:10.465 ' 00:05:10.465 18:54:36 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e5fe3b7-19be-4379-823f-d85818d43e03 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=2e5fe3b7-19be-4379-823f-d85818d43e03 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:10.465 18:54:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.465 18:54:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.465 18:54:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.465 18:54:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.465 18:54:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.465 18:54:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.465 18:54:36 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.465 18:54:36 json_config -- paths/export.sh@5 -- # export PATH 00:05:10.465 18:54:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@51 -- # : 0 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.465 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.465 18:54:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.465 18:54:36 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:10.465 18:54:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:10.465 18:54:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:10.465 18:54:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:10.465 18:54:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:10.465 WARNING: No tests are enabled so not running JSON configuration tests 00:05:10.465 18:54:36 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:10.465 18:54:36 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:10.465 00:05:10.465 real 0m0.198s 00:05:10.465 user 0m0.131s 00:05:10.465 sys 0m0.073s 00:05:10.465 18:54:36 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.465 18:54:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.465 ************************************ 00:05:10.465 END TEST json_config 00:05:10.465 ************************************ 00:05:10.465 18:54:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:10.465 18:54:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.465 18:54:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.465 18:54:36 -- common/autotest_common.sh@10 -- # set +x 00:05:10.465 ************************************ 00:05:10.465 START TEST json_config_extra_key 00:05:10.465 ************************************ 00:05:10.465 18:54:36 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:10.465 18:54:36 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.465 18:54:36 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:10.465 18:54:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.465 18:54:37 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.465 18:54:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.465 18:54:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.465 18:54:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.465 18:54:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:10.466 18:54:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:10.724 18:54:37 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.724 18:54:37 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.724 --rc genhtml_branch_coverage=1 00:05:10.724 --rc genhtml_function_coverage=1 00:05:10.724 --rc genhtml_legend=1 00:05:10.724 --rc geninfo_all_blocks=1 00:05:10.724 --rc geninfo_unexecuted_blocks=1 00:05:10.724 00:05:10.724 ' 00:05:10.724 18:54:37 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.724 --rc genhtml_branch_coverage=1 00:05:10.724 --rc genhtml_function_coverage=1 00:05:10.724 --rc 
genhtml_legend=1 00:05:10.724 --rc geninfo_all_blocks=1 00:05:10.724 --rc geninfo_unexecuted_blocks=1 00:05:10.724 00:05:10.724 ' 00:05:10.724 18:54:37 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.724 --rc genhtml_branch_coverage=1 00:05:10.724 --rc genhtml_function_coverage=1 00:05:10.724 --rc genhtml_legend=1 00:05:10.724 --rc geninfo_all_blocks=1 00:05:10.724 --rc geninfo_unexecuted_blocks=1 00:05:10.724 00:05:10.724 ' 00:05:10.724 18:54:37 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.724 --rc genhtml_branch_coverage=1 00:05:10.724 --rc genhtml_function_coverage=1 00:05:10.724 --rc genhtml_legend=1 00:05:10.724 --rc geninfo_all_blocks=1 00:05:10.724 --rc geninfo_unexecuted_blocks=1 00:05:10.724 00:05:10.724 ' 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e5fe3b7-19be-4379-823f-d85818d43e03 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2e5fe3b7-19be-4379-823f-d85818d43e03 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.724 18:54:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.724 18:54:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.724 18:54:37 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.724 18:54:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.724 18:54:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:10.724 18:54:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.724 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.724 18:54:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.724 INFO: launching applications... 00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:10.724 18:54:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:10.724 18:54:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:10.724 18:54:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:10.724 18:54:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.724 18:54:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.724 18:54:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.724 18:54:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.724 18:54:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.724 18:54:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57815 00:05:10.724 18:54:37 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:10.724 Waiting for target to run... 00:05:10.724 18:54:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.724 18:54:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57815 /var/tmp/spdk_tgt.sock 00:05:10.724 18:54:37 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57815 ']' 00:05:10.724 18:54:37 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.724 18:54:37 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:10.724 18:54:37 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.724 18:54:37 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.724 18:54:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:10.725 [2024-11-26 18:54:37.257077] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:05:10.725 [2024-11-26 18:54:37.257277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57815 ] 00:05:11.291 [2024-11-26 18:54:37.746892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.291 [2024-11-26 18:54:37.869234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.225 18:54:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.225 18:54:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:12.225 00:05:12.225 18:54:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:12.225 INFO: shutting down applications... 00:05:12.225 18:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:12.225 18:54:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:12.225 18:54:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:12.225 18:54:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.225 18:54:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57815 ]] 00:05:12.225 18:54:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57815 00:05:12.225 18:54:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.225 18:54:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.225 18:54:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:05:12.225 18:54:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.789 18:54:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.789 18:54:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.789 18:54:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:05:12.789 18:54:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.048 18:54:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.048 18:54:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.048 18:54:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:05:13.048 18:54:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.615 18:54:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.615 18:54:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.615 18:54:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:05:13.615 18:54:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.181 18:54:40 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:14.181 18:54:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.181 18:54:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:05:14.181 18:54:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.745 18:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.745 18:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.745 18:54:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:05:14.745 18:54:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.311 18:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.311 18:54:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.311 18:54:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:05:15.311 18:54:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.311 18:54:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:15.311 SPDK target shutdown done 00:05:15.311 18:54:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.311 18:54:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.311 Success 00:05:15.311 18:54:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:15.311 00:05:15.311 real 0m4.756s 00:05:15.311 user 0m4.369s 00:05:15.311 sys 0m0.700s 00:05:15.311 18:54:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.311 18:54:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.311 ************************************ 00:05:15.311 END TEST json_config_extra_key 00:05:15.311 ************************************ 00:05:15.311 18:54:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.311 18:54:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.311 18:54:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.311 18:54:41 -- common/autotest_common.sh@10 -- # set +x 00:05:15.311 ************************************ 00:05:15.311 START TEST alias_rpc 00:05:15.311 ************************************ 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.311 * Looking for test storage... 00:05:15.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:15.311 18:54:41 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.311 18:54:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.311 --rc genhtml_branch_coverage=1 00:05:15.311 --rc genhtml_function_coverage=1 00:05:15.311 --rc genhtml_legend=1 00:05:15.311 --rc geninfo_all_blocks=1 00:05:15.311 --rc geninfo_unexecuted_blocks=1 00:05:15.311 00:05:15.311 ' 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.311 --rc genhtml_branch_coverage=1 00:05:15.311 --rc genhtml_function_coverage=1 00:05:15.311 --rc 
genhtml_legend=1 00:05:15.311 --rc geninfo_all_blocks=1 00:05:15.311 --rc geninfo_unexecuted_blocks=1 00:05:15.311 00:05:15.311 ' 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.311 --rc genhtml_branch_coverage=1 00:05:15.311 --rc genhtml_function_coverage=1 00:05:15.311 --rc genhtml_legend=1 00:05:15.311 --rc geninfo_all_blocks=1 00:05:15.311 --rc geninfo_unexecuted_blocks=1 00:05:15.311 00:05:15.311 ' 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.311 --rc genhtml_branch_coverage=1 00:05:15.311 --rc genhtml_function_coverage=1 00:05:15.311 --rc genhtml_legend=1 00:05:15.311 --rc geninfo_all_blocks=1 00:05:15.311 --rc geninfo_unexecuted_blocks=1 00:05:15.311 00:05:15.311 ' 00:05:15.311 18:54:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:15.311 18:54:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57921 00:05:15.311 18:54:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57921 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57921 ']' 00:05:15.311 18:54:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.311 18:54:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.569 [2024-11-26 18:54:42.061799] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:05:15.569 [2024-11-26 18:54:42.062008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57921 ] 00:05:15.826 [2024-11-26 18:54:42.253945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.826 [2024-11-26 18:54:42.427189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.202 18:54:43 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.202 18:54:43 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.202 18:54:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:17.202 18:54:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57921 00:05:17.202 18:54:43 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57921 ']' 00:05:17.202 18:54:43 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57921 00:05:17.202 18:54:43 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:17.202 18:54:43 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.202 18:54:43 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57921 00:05:17.202 18:54:43 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.202 18:54:43 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.202 killing process with pid 57921 00:05:17.202 18:54:43 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57921' 00:05:17.202 18:54:43 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57921 00:05:17.202 18:54:43 alias_rpc -- common/autotest_common.sh@978 -- # wait 57921 00:05:19.732 00:05:19.732 real 0m4.389s 00:05:19.732 user 0m4.525s 00:05:19.732 sys 0m0.738s 00:05:19.732 18:54:46 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.732 18:54:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.732 ************************************ 00:05:19.732 END TEST alias_rpc 00:05:19.732 ************************************ 00:05:19.732 18:54:46 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:19.732 18:54:46 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:19.732 18:54:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.732 18:54:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.732 18:54:46 -- common/autotest_common.sh@10 -- # set +x 00:05:19.732 ************************************ 00:05:19.732 START TEST spdkcli_tcp 00:05:19.732 ************************************ 00:05:19.732 18:54:46 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:19.732 * Looking for test storage... 
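The alias_rpc teardown traced above (`uname`, `ps --no-headers -o comm=`, the `reactor_0 = sudo` guard, `kill`, then `wait`) follows a recognizable pattern. A minimal sketch of that teardown sequence, reconstructed from the traced commands; the function name and the exact guard semantics are assumptions for illustration, not the verbatim `autotest_common.sh` source:

```shell
# Hypothetical sketch of a "killprocess"-style teardown helper:
# look up the process name, refuse to signal a sudo wrapper,
# send SIGTERM, and reap the child so the pid is fully gone.
killprocess() {
    local pid=$1
    # Not running (or never existed): nothing to do.
    kill -0 "$pid" 2>/dev/null || return 1
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    # The trace guards against killing a sudo wrapper directly.
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reaps the child when it was launched by this shell;
    # a robust helper would poll with kill -0 otherwise.
    wait "$pid" 2>/dev/null || true
}
```

This matches the log's ordering: the "killing process with pid …" echo appears before the `kill`/`wait` pair, and the subsequent `real/user/sys` timing lines confirm the test exited only after the target was reaped.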
00:05:19.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:19.732 18:54:46 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.732 18:54:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.732 18:54:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.732 18:54:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:19.732 18:54:46 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.990 18:54:46 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:19.990 18:54:46 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.990 18:54:46 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.990 --rc genhtml_branch_coverage=1 00:05:19.990 --rc genhtml_function_coverage=1 00:05:19.990 --rc genhtml_legend=1 00:05:19.990 --rc geninfo_all_blocks=1 00:05:19.990 --rc geninfo_unexecuted_blocks=1 00:05:19.990 00:05:19.990 ' 00:05:19.990 18:54:46 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.990 --rc genhtml_branch_coverage=1 00:05:19.990 --rc genhtml_function_coverage=1 00:05:19.990 --rc genhtml_legend=1 00:05:19.990 --rc geninfo_all_blocks=1 00:05:19.990 --rc geninfo_unexecuted_blocks=1 00:05:19.990 00:05:19.990 ' 00:05:19.990 18:54:46 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.990 --rc genhtml_branch_coverage=1 00:05:19.990 --rc genhtml_function_coverage=1 00:05:19.990 --rc genhtml_legend=1 00:05:19.990 --rc geninfo_all_blocks=1 00:05:19.990 --rc geninfo_unexecuted_blocks=1 00:05:19.990 00:05:19.990 ' 00:05:19.990 18:54:46 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.991 --rc genhtml_branch_coverage=1 00:05:19.991 --rc genhtml_function_coverage=1 00:05:19.991 --rc genhtml_legend=1 00:05:19.991 --rc geninfo_all_blocks=1 00:05:19.991 --rc geninfo_unexecuted_blocks=1 00:05:19.991 00:05:19.991 ' 00:05:19.991 18:54:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:19.991 18:54:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:19.991 18:54:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:19.991 18:54:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:19.991 18:54:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:19.991 18:54:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:19.991 18:54:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:19.991 18:54:46 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.991 18:54:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.991 18:54:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58039 00:05:19.991 18:54:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58039 00:05:19.991 18:54:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:19.991 18:54:46 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 58039 ']' 00:05:19.991 18:54:46 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.991 18:54:46 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.991 18:54:46 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.991 18:54:46 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.991 18:54:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.991 [2024-11-26 18:54:46.505304] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:05:19.991 [2024-11-26 18:54:46.505500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58039 ] 00:05:20.249 [2024-11-26 18:54:46.694171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.507 [2024-11-26 18:54:46.873414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.507 [2024-11-26 18:54:46.873477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.443 18:54:47 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.443 18:54:47 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:21.443 18:54:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58056 00:05:21.443 18:54:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:21.443 18:54:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:21.702 [ 00:05:21.703 "bdev_malloc_delete", 
00:05:21.703 "bdev_malloc_create", 00:05:21.703 "bdev_null_resize", 00:05:21.703 "bdev_null_delete", 00:05:21.703 "bdev_null_create", 00:05:21.703 "bdev_nvme_cuse_unregister", 00:05:21.703 "bdev_nvme_cuse_register", 00:05:21.703 "bdev_opal_new_user", 00:05:21.703 "bdev_opal_set_lock_state", 00:05:21.703 "bdev_opal_delete", 00:05:21.703 "bdev_opal_get_info", 00:05:21.703 "bdev_opal_create", 00:05:21.703 "bdev_nvme_opal_revert", 00:05:21.703 "bdev_nvme_opal_init", 00:05:21.703 "bdev_nvme_send_cmd", 00:05:21.703 "bdev_nvme_set_keys", 00:05:21.703 "bdev_nvme_get_path_iostat", 00:05:21.703 "bdev_nvme_get_mdns_discovery_info", 00:05:21.703 "bdev_nvme_stop_mdns_discovery", 00:05:21.703 "bdev_nvme_start_mdns_discovery", 00:05:21.703 "bdev_nvme_set_multipath_policy", 00:05:21.703 "bdev_nvme_set_preferred_path", 00:05:21.703 "bdev_nvme_get_io_paths", 00:05:21.703 "bdev_nvme_remove_error_injection", 00:05:21.703 "bdev_nvme_add_error_injection", 00:05:21.703 "bdev_nvme_get_discovery_info", 00:05:21.703 "bdev_nvme_stop_discovery", 00:05:21.703 "bdev_nvme_start_discovery", 00:05:21.703 "bdev_nvme_get_controller_health_info", 00:05:21.703 "bdev_nvme_disable_controller", 00:05:21.703 "bdev_nvme_enable_controller", 00:05:21.703 "bdev_nvme_reset_controller", 00:05:21.703 "bdev_nvme_get_transport_statistics", 00:05:21.703 "bdev_nvme_apply_firmware", 00:05:21.703 "bdev_nvme_detach_controller", 00:05:21.703 "bdev_nvme_get_controllers", 00:05:21.703 "bdev_nvme_attach_controller", 00:05:21.703 "bdev_nvme_set_hotplug", 00:05:21.703 "bdev_nvme_set_options", 00:05:21.703 "bdev_passthru_delete", 00:05:21.703 "bdev_passthru_create", 00:05:21.703 "bdev_lvol_set_parent_bdev", 00:05:21.703 "bdev_lvol_set_parent", 00:05:21.703 "bdev_lvol_check_shallow_copy", 00:05:21.703 "bdev_lvol_start_shallow_copy", 00:05:21.703 "bdev_lvol_grow_lvstore", 00:05:21.703 "bdev_lvol_get_lvols", 00:05:21.703 "bdev_lvol_get_lvstores", 00:05:21.703 "bdev_lvol_delete", 00:05:21.703 "bdev_lvol_set_read_only", 
00:05:21.703 "bdev_lvol_resize", 00:05:21.703 "bdev_lvol_decouple_parent", 00:05:21.703 "bdev_lvol_inflate", 00:05:21.703 "bdev_lvol_rename", 00:05:21.703 "bdev_lvol_clone_bdev", 00:05:21.703 "bdev_lvol_clone", 00:05:21.703 "bdev_lvol_snapshot", 00:05:21.703 "bdev_lvol_create", 00:05:21.703 "bdev_lvol_delete_lvstore", 00:05:21.703 "bdev_lvol_rename_lvstore", 00:05:21.703 "bdev_lvol_create_lvstore", 00:05:21.703 "bdev_raid_set_options", 00:05:21.703 "bdev_raid_remove_base_bdev", 00:05:21.703 "bdev_raid_add_base_bdev", 00:05:21.703 "bdev_raid_delete", 00:05:21.703 "bdev_raid_create", 00:05:21.703 "bdev_raid_get_bdevs", 00:05:21.703 "bdev_error_inject_error", 00:05:21.703 "bdev_error_delete", 00:05:21.703 "bdev_error_create", 00:05:21.703 "bdev_split_delete", 00:05:21.703 "bdev_split_create", 00:05:21.703 "bdev_delay_delete", 00:05:21.703 "bdev_delay_create", 00:05:21.703 "bdev_delay_update_latency", 00:05:21.703 "bdev_zone_block_delete", 00:05:21.703 "bdev_zone_block_create", 00:05:21.703 "blobfs_create", 00:05:21.703 "blobfs_detect", 00:05:21.703 "blobfs_set_cache_size", 00:05:21.703 "bdev_aio_delete", 00:05:21.703 "bdev_aio_rescan", 00:05:21.703 "bdev_aio_create", 00:05:21.703 "bdev_ftl_set_property", 00:05:21.703 "bdev_ftl_get_properties", 00:05:21.703 "bdev_ftl_get_stats", 00:05:21.703 "bdev_ftl_unmap", 00:05:21.703 "bdev_ftl_unload", 00:05:21.703 "bdev_ftl_delete", 00:05:21.703 "bdev_ftl_load", 00:05:21.703 "bdev_ftl_create", 00:05:21.703 "bdev_virtio_attach_controller", 00:05:21.703 "bdev_virtio_scsi_get_devices", 00:05:21.703 "bdev_virtio_detach_controller", 00:05:21.703 "bdev_virtio_blk_set_hotplug", 00:05:21.703 "bdev_iscsi_delete", 00:05:21.703 "bdev_iscsi_create", 00:05:21.703 "bdev_iscsi_set_options", 00:05:21.703 "accel_error_inject_error", 00:05:21.703 "ioat_scan_accel_module", 00:05:21.703 "dsa_scan_accel_module", 00:05:21.703 "iaa_scan_accel_module", 00:05:21.703 "keyring_file_remove_key", 00:05:21.703 "keyring_file_add_key", 00:05:21.703 
"keyring_linux_set_options", 00:05:21.703 "fsdev_aio_delete", 00:05:21.703 "fsdev_aio_create", 00:05:21.703 "iscsi_get_histogram", 00:05:21.703 "iscsi_enable_histogram", 00:05:21.703 "iscsi_set_options", 00:05:21.703 "iscsi_get_auth_groups", 00:05:21.703 "iscsi_auth_group_remove_secret", 00:05:21.703 "iscsi_auth_group_add_secret", 00:05:21.703 "iscsi_delete_auth_group", 00:05:21.703 "iscsi_create_auth_group", 00:05:21.703 "iscsi_set_discovery_auth", 00:05:21.703 "iscsi_get_options", 00:05:21.703 "iscsi_target_node_request_logout", 00:05:21.703 "iscsi_target_node_set_redirect", 00:05:21.703 "iscsi_target_node_set_auth", 00:05:21.703 "iscsi_target_node_add_lun", 00:05:21.703 "iscsi_get_stats", 00:05:21.703 "iscsi_get_connections", 00:05:21.703 "iscsi_portal_group_set_auth", 00:05:21.703 "iscsi_start_portal_group", 00:05:21.703 "iscsi_delete_portal_group", 00:05:21.703 "iscsi_create_portal_group", 00:05:21.703 "iscsi_get_portal_groups", 00:05:21.703 "iscsi_delete_target_node", 00:05:21.703 "iscsi_target_node_remove_pg_ig_maps", 00:05:21.703 "iscsi_target_node_add_pg_ig_maps", 00:05:21.703 "iscsi_create_target_node", 00:05:21.703 "iscsi_get_target_nodes", 00:05:21.703 "iscsi_delete_initiator_group", 00:05:21.703 "iscsi_initiator_group_remove_initiators", 00:05:21.703 "iscsi_initiator_group_add_initiators", 00:05:21.703 "iscsi_create_initiator_group", 00:05:21.703 "iscsi_get_initiator_groups", 00:05:21.703 "nvmf_set_crdt", 00:05:21.703 "nvmf_set_config", 00:05:21.703 "nvmf_set_max_subsystems", 00:05:21.703 "nvmf_stop_mdns_prr", 00:05:21.703 "nvmf_publish_mdns_prr", 00:05:21.703 "nvmf_subsystem_get_listeners", 00:05:21.703 "nvmf_subsystem_get_qpairs", 00:05:21.703 "nvmf_subsystem_get_controllers", 00:05:21.703 "nvmf_get_stats", 00:05:21.703 "nvmf_get_transports", 00:05:21.703 "nvmf_create_transport", 00:05:21.703 "nvmf_get_targets", 00:05:21.703 "nvmf_delete_target", 00:05:21.703 "nvmf_create_target", 00:05:21.703 "nvmf_subsystem_allow_any_host", 00:05:21.703 
"nvmf_subsystem_set_keys", 00:05:21.703 "nvmf_subsystem_remove_host", 00:05:21.703 "nvmf_subsystem_add_host", 00:05:21.703 "nvmf_ns_remove_host", 00:05:21.703 "nvmf_ns_add_host", 00:05:21.703 "nvmf_subsystem_remove_ns", 00:05:21.703 "nvmf_subsystem_set_ns_ana_group", 00:05:21.703 "nvmf_subsystem_add_ns", 00:05:21.703 "nvmf_subsystem_listener_set_ana_state", 00:05:21.703 "nvmf_discovery_get_referrals", 00:05:21.703 "nvmf_discovery_remove_referral", 00:05:21.703 "nvmf_discovery_add_referral", 00:05:21.703 "nvmf_subsystem_remove_listener", 00:05:21.703 "nvmf_subsystem_add_listener", 00:05:21.703 "nvmf_delete_subsystem", 00:05:21.704 "nvmf_create_subsystem", 00:05:21.704 "nvmf_get_subsystems", 00:05:21.704 "env_dpdk_get_mem_stats", 00:05:21.704 "nbd_get_disks", 00:05:21.704 "nbd_stop_disk", 00:05:21.704 "nbd_start_disk", 00:05:21.704 "ublk_recover_disk", 00:05:21.704 "ublk_get_disks", 00:05:21.704 "ublk_stop_disk", 00:05:21.704 "ublk_start_disk", 00:05:21.704 "ublk_destroy_target", 00:05:21.704 "ublk_create_target", 00:05:21.704 "virtio_blk_create_transport", 00:05:21.704 "virtio_blk_get_transports", 00:05:21.704 "vhost_controller_set_coalescing", 00:05:21.704 "vhost_get_controllers", 00:05:21.704 "vhost_delete_controller", 00:05:21.704 "vhost_create_blk_controller", 00:05:21.704 "vhost_scsi_controller_remove_target", 00:05:21.704 "vhost_scsi_controller_add_target", 00:05:21.704 "vhost_start_scsi_controller", 00:05:21.704 "vhost_create_scsi_controller", 00:05:21.704 "thread_set_cpumask", 00:05:21.704 "scheduler_set_options", 00:05:21.704 "framework_get_governor", 00:05:21.704 "framework_get_scheduler", 00:05:21.704 "framework_set_scheduler", 00:05:21.704 "framework_get_reactors", 00:05:21.704 "thread_get_io_channels", 00:05:21.704 "thread_get_pollers", 00:05:21.704 "thread_get_stats", 00:05:21.704 "framework_monitor_context_switch", 00:05:21.704 "spdk_kill_instance", 00:05:21.704 "log_enable_timestamps", 00:05:21.704 "log_get_flags", 00:05:21.704 "log_clear_flag", 
00:05:21.704 "log_set_flag", 00:05:21.704 "log_get_level", 00:05:21.704 "log_set_level", 00:05:21.704 "log_get_print_level", 00:05:21.704 "log_set_print_level", 00:05:21.704 "framework_enable_cpumask_locks", 00:05:21.704 "framework_disable_cpumask_locks", 00:05:21.704 "framework_wait_init", 00:05:21.704 "framework_start_init", 00:05:21.704 "scsi_get_devices", 00:05:21.704 "bdev_get_histogram", 00:05:21.704 "bdev_enable_histogram", 00:05:21.704 "bdev_set_qos_limit", 00:05:21.704 "bdev_set_qd_sampling_period", 00:05:21.704 "bdev_get_bdevs", 00:05:21.704 "bdev_reset_iostat", 00:05:21.704 "bdev_get_iostat", 00:05:21.704 "bdev_examine", 00:05:21.704 "bdev_wait_for_examine", 00:05:21.704 "bdev_set_options", 00:05:21.704 "accel_get_stats", 00:05:21.704 "accel_set_options", 00:05:21.704 "accel_set_driver", 00:05:21.704 "accel_crypto_key_destroy", 00:05:21.704 "accel_crypto_keys_get", 00:05:21.704 "accel_crypto_key_create", 00:05:21.704 "accel_assign_opc", 00:05:21.704 "accel_get_module_info", 00:05:21.704 "accel_get_opc_assignments", 00:05:21.704 "vmd_rescan", 00:05:21.704 "vmd_remove_device", 00:05:21.704 "vmd_enable", 00:05:21.704 "sock_get_default_impl", 00:05:21.704 "sock_set_default_impl", 00:05:21.704 "sock_impl_set_options", 00:05:21.704 "sock_impl_get_options", 00:05:21.704 "iobuf_get_stats", 00:05:21.704 "iobuf_set_options", 00:05:21.704 "keyring_get_keys", 00:05:21.704 "framework_get_pci_devices", 00:05:21.704 "framework_get_config", 00:05:21.704 "framework_get_subsystems", 00:05:21.704 "fsdev_set_opts", 00:05:21.704 "fsdev_get_opts", 00:05:21.704 "trace_get_info", 00:05:21.704 "trace_get_tpoint_group_mask", 00:05:21.704 "trace_disable_tpoint_group", 00:05:21.704 "trace_enable_tpoint_group", 00:05:21.704 "trace_clear_tpoint_mask", 00:05:21.704 "trace_set_tpoint_mask", 00:05:21.704 "notify_get_notifications", 00:05:21.704 "notify_get_types", 00:05:21.704 "spdk_get_version", 00:05:21.704 "rpc_get_methods" 00:05:21.704 ] 00:05:21.704 18:54:48 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.704 18:54:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:21.704 18:54:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58039 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58039 ']' 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58039 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58039 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.704 killing process with pid 58039 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58039' 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58039 00:05:21.704 18:54:48 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58039 00:05:24.234 00:05:24.234 real 0m4.475s 00:05:24.234 user 0m7.991s 00:05:24.234 sys 0m0.760s 00:05:24.234 18:54:50 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.234 18:54:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.234 ************************************ 00:05:24.234 END TEST spdkcli_tcp 00:05:24.234 ************************************ 00:05:24.234 18:54:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.234 18:54:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.234 18:54:50 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.234 18:54:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.234 ************************************ 00:05:24.234 START TEST dpdk_mem_utility 00:05:24.234 ************************************ 00:05:24.234 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.234 * Looking for test storage... 00:05:24.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:24.234 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.234 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.234 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:24.492 
18:54:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.492 18:54:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.492 --rc genhtml_branch_coverage=1 00:05:24.492 --rc genhtml_function_coverage=1 00:05:24.492 --rc genhtml_legend=1 00:05:24.492 --rc geninfo_all_blocks=1 00:05:24.492 --rc geninfo_unexecuted_blocks=1 00:05:24.492 00:05:24.492 ' 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.492 --rc 
genhtml_branch_coverage=1 00:05:24.492 --rc genhtml_function_coverage=1 00:05:24.492 --rc genhtml_legend=1 00:05:24.492 --rc geninfo_all_blocks=1 00:05:24.492 --rc geninfo_unexecuted_blocks=1 00:05:24.492 00:05:24.492 ' 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.492 --rc genhtml_branch_coverage=1 00:05:24.492 --rc genhtml_function_coverage=1 00:05:24.492 --rc genhtml_legend=1 00:05:24.492 --rc geninfo_all_blocks=1 00:05:24.492 --rc geninfo_unexecuted_blocks=1 00:05:24.492 00:05:24.492 ' 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.492 --rc genhtml_branch_coverage=1 00:05:24.492 --rc genhtml_function_coverage=1 00:05:24.492 --rc genhtml_legend=1 00:05:24.492 --rc geninfo_all_blocks=1 00:05:24.492 --rc geninfo_unexecuted_blocks=1 00:05:24.492 00:05:24.492 ' 00:05:24.492 18:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.492 18:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58161 00:05:24.492 18:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.492 18:54:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58161 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58161 ']' 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:24.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.492 18:54:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.492 [2024-11-26 18:54:51.025954] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:05:24.492 [2024-11-26 18:54:51.026118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58161 ] 00:05:24.751 [2024-11-26 18:54:51.201169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.751 [2024-11-26 18:54:51.352303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.125 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.125 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:26.125 18:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:26.125 18:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:26.125 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.125 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.125 { 00:05:26.125 "filename": "/tmp/spdk_mem_dump.txt" 00:05:26.125 } 00:05:26.125 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.125 18:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:26.125 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:26.125 1 heaps totaling size 824.000000 MiB 00:05:26.125 size: 
824.000000 MiB heap id: 0 00:05:26.125 end heaps---------- 00:05:26.125 9 mempools totaling size 603.782043 MiB 00:05:26.125 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:26.125 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:26.125 size: 100.555481 MiB name: bdev_io_58161 00:05:26.125 size: 50.003479 MiB name: msgpool_58161 00:05:26.125 size: 36.509338 MiB name: fsdev_io_58161 00:05:26.125 size: 21.763794 MiB name: PDU_Pool 00:05:26.125 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:26.125 size: 4.133484 MiB name: evtpool_58161 00:05:26.125 size: 0.026123 MiB name: Session_Pool 00:05:26.125 end mempools------- 00:05:26.125 6 memzones totaling size 4.142822 MiB 00:05:26.125 size: 1.000366 MiB name: RG_ring_0_58161 00:05:26.125 size: 1.000366 MiB name: RG_ring_1_58161 00:05:26.125 size: 1.000366 MiB name: RG_ring_4_58161 00:05:26.125 size: 1.000366 MiB name: RG_ring_5_58161 00:05:26.125 size: 0.125366 MiB name: RG_ring_2_58161 00:05:26.125 size: 0.015991 MiB name: RG_ring_3_58161 00:05:26.125 end memzones------- 00:05:26.125 18:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:26.125 heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18 00:05:26.125 list of free elements. 
size: 16.781860 MiB 00:05:26.125 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:26.125 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:26.125 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:26.125 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:26.125 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:26.125 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:26.125 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:26.125 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:26.125 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:26.125 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:26.125 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:26.125 element at address: 0x20001b400000 with size: 0.563171 MiB 00:05:26.125 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:26.125 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:26.125 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:26.125 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:26.125 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:26.125 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:26.125 list of standard malloc elements. 
size: 199.287231 MiB 00:05:26.125 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:26.125 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:26.125 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:26.125 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:26.125 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:26.125 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:26.125 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:26.125 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:26.125 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:26.125 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:26.125 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:26.125 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:26.125 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:26.125 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:26.125 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:26.125 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:26.125 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:26.125 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:26.125 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:26.125 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:26.125 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:26.126 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:26.126 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:26.126 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:26.126 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4913c0 with size: 0.000244 
MiB 00:05:26.126 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b492fc0 
with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:26.126 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:26.127 element at 
address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:26.127 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:26.127 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886be80 with size: 0.000244 MiB 
00:05:26.127 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886da80 with 
size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:26.127 element at address: 
0x20002886f680 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:26.127 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:26.127 list of memzone associated elements. size: 607.930908 MiB 00:05:26.127 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:26.127 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:26.127 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:26.127 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:26.127 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:26.127 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58161_0 00:05:26.127 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:26.127 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58161_0 00:05:26.127 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:26.127 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58161_0 00:05:26.127 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:26.127 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:26.127 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:26.127 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:26.127 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:26.127 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58161_0 00:05:26.127 element at address: 0x2000009ffdc0 
with size: 2.000549 MiB 00:05:26.127 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58161 00:05:26.127 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:26.127 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58161 00:05:26.127 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:26.127 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:26.127 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:26.127 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:26.127 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:26.127 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:26.127 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:26.127 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:26.127 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:26.127 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58161 00:05:26.127 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:26.127 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58161 00:05:26.127 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:26.127 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58161 00:05:26.127 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:26.127 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58161 00:05:26.127 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:26.127 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58161 00:05:26.128 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:26.128 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58161 00:05:26.128 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:26.128 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:26.128 element at address: 0x200012c6f980 with 
size: 0.500549 MiB 00:05:26.128 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:26.128 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:26.128 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:26.128 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:26.128 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58161 00:05:26.128 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:26.128 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58161 00:05:26.128 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:26.128 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:26.128 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:26.128 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:26.128 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:26.128 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58161 00:05:26.128 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:26.128 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:26.128 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:26.128 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58161 00:05:26.128 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:26.128 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58161 00:05:26.128 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:26.128 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58161 00:05:26.128 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:26.128 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:26.128 18:54:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:26.128 18:54:52 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58161 00:05:26.128 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58161 ']' 00:05:26.128 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58161 00:05:26.128 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:26.128 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.128 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58161 00:05:26.128 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.128 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.128 killing process with pid 58161 00:05:26.128 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58161' 00:05:26.128 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58161 00:05:26.128 18:54:52 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58161 00:05:28.653 00:05:28.653 real 0m4.288s 00:05:28.653 user 0m4.242s 00:05:28.653 sys 0m0.711s 00:05:28.653 18:54:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.653 ************************************ 00:05:28.653 END TEST dpdk_mem_utility 00:05:28.653 18:54:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.653 ************************************ 00:05:28.653 18:54:55 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:28.653 18:54:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.653 18:54:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.653 18:54:55 -- common/autotest_common.sh@10 -- # set +x 00:05:28.653 ************************************ 00:05:28.653 START TEST event 00:05:28.653 ************************************ 00:05:28.653 18:54:55 event -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:28.653 * Looking for test storage... 00:05:28.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:28.653 18:54:55 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.653 18:54:55 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.653 18:54:55 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.653 18:54:55 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.653 18:54:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.653 18:54:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.653 18:54:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.653 18:54:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.653 18:54:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.653 18:54:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.653 18:54:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.653 18:54:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.653 18:54:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.653 18:54:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.653 18:54:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.653 18:54:55 event -- scripts/common.sh@344 -- # case "$op" in 00:05:28.653 18:54:55 event -- scripts/common.sh@345 -- # : 1 00:05:28.653 18:54:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.653 18:54:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.653 18:54:55 event -- scripts/common.sh@365 -- # decimal 1 00:05:28.653 18:54:55 event -- scripts/common.sh@353 -- # local d=1 00:05:28.654 18:54:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.654 18:54:55 event -- scripts/common.sh@355 -- # echo 1 00:05:28.654 18:54:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.654 18:54:55 event -- scripts/common.sh@366 -- # decimal 2 00:05:28.654 18:54:55 event -- scripts/common.sh@353 -- # local d=2 00:05:28.654 18:54:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.654 18:54:55 event -- scripts/common.sh@355 -- # echo 2 00:05:28.654 18:54:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.654 18:54:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.654 18:54:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.654 18:54:55 event -- scripts/common.sh@368 -- # return 0 00:05:28.654 18:54:55 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.654 18:54:55 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.654 --rc genhtml_branch_coverage=1 00:05:28.654 --rc genhtml_function_coverage=1 00:05:28.654 --rc genhtml_legend=1 00:05:28.654 --rc geninfo_all_blocks=1 00:05:28.654 --rc geninfo_unexecuted_blocks=1 00:05:28.654 00:05:28.654 ' 00:05:28.654 18:54:55 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.654 --rc genhtml_branch_coverage=1 00:05:28.654 --rc genhtml_function_coverage=1 00:05:28.654 --rc genhtml_legend=1 00:05:28.654 --rc geninfo_all_blocks=1 00:05:28.654 --rc geninfo_unexecuted_blocks=1 00:05:28.654 00:05:28.654 ' 00:05:28.654 18:54:55 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.654 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:28.654 --rc genhtml_branch_coverage=1 00:05:28.654 --rc genhtml_function_coverage=1 00:05:28.654 --rc genhtml_legend=1 00:05:28.654 --rc geninfo_all_blocks=1 00:05:28.654 --rc geninfo_unexecuted_blocks=1 00:05:28.654 00:05:28.654 ' 00:05:28.654 18:54:55 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.654 --rc genhtml_branch_coverage=1 00:05:28.654 --rc genhtml_function_coverage=1 00:05:28.654 --rc genhtml_legend=1 00:05:28.654 --rc geninfo_all_blocks=1 00:05:28.654 --rc geninfo_unexecuted_blocks=1 00:05:28.654 00:05:28.654 ' 00:05:28.654 18:54:55 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:28.654 18:54:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.654 18:54:55 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.654 18:54:55 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:28.654 18:54:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.654 18:54:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.654 ************************************ 00:05:28.654 START TEST event_perf 00:05:28.654 ************************************ 00:05:28.654 18:54:55 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.912 Running I/O for 1 seconds...[2024-11-26 18:54:55.302258] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
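The xtrace above walks through the version comparison in scripts/common.sh (`lt 1.15 2` → `cmp_versions`, splitting each version on `IFS=.-:` and comparing field by field). A minimal sketch of that logic, reconstructed from the trace — function bodies are assumptions beyond what the trace shows:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions/lt helpers traced above: split each version
# string on '.', '-' and ':' and compare numerically, field by field.
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    # Missing fields are treated as 0, so "1.15" vs "2" compares (1,15) vs (2,0)
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 < 2"
```

This is why the lcov check in the trace returns 0: 1 < 2 decides the comparison at the first field.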
00:05:28.912 [2024-11-26 18:54:55.302454] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58269 ] 00:05:28.912 [2024-11-26 18:54:55.498940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.170 [2024-11-26 18:54:55.662545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.170 [2024-11-26 18:54:55.662633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.170 Running I/O for 1 seconds...[2024-11-26 18:54:55.662728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.170 [2024-11-26 18:54:55.662753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.552 00:05:30.552 lcore 0: 173372 00:05:30.552 lcore 1: 173372 00:05:30.552 lcore 2: 173369 00:05:30.552 lcore 3: 173370 00:05:30.552 done. 
00:05:30.552 00:05:30.552 real 0m1.691s 00:05:30.552 user 0m4.407s 00:05:30.552 sys 0m0.151s 00:05:30.552 18:54:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.552 18:54:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.552 ************************************ 00:05:30.552 END TEST event_perf 00:05:30.552 ************************************ 00:05:30.552 18:54:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:30.552 18:54:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:30.552 18:54:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.552 18:54:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.552 ************************************ 00:05:30.552 START TEST event_reactor 00:05:30.552 ************************************ 00:05:30.552 18:54:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:30.552 [2024-11-26 18:54:57.037039] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:05:30.552 [2024-11-26 18:54:57.037210] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58314 ] 00:05:30.810 [2024-11-26 18:54:57.230568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.810 [2024-11-26 18:54:57.390970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.185 test_start 00:05:32.185 oneshot 00:05:32.185 tick 100 00:05:32.185 tick 100 00:05:32.185 tick 250 00:05:32.185 tick 100 00:05:32.185 tick 100 00:05:32.185 tick 100 00:05:32.185 tick 250 00:05:32.185 tick 500 00:05:32.185 tick 100 00:05:32.185 tick 100 00:05:32.185 tick 250 00:05:32.185 tick 100 00:05:32.185 tick 100 00:05:32.185 test_end 00:05:32.185 00:05:32.185 real 0m1.626s 00:05:32.185 user 0m1.410s 00:05:32.185 sys 0m0.107s 00:05:32.185 18:54:58 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.185 18:54:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:32.185 ************************************ 00:05:32.185 END TEST event_reactor 00:05:32.185 ************************************ 00:05:32.185 18:54:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.185 18:54:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:32.185 18:54:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.185 18:54:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.185 ************************************ 00:05:32.185 START TEST event_reactor_perf 00:05:32.185 ************************************ 00:05:32.185 18:54:58 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.185 [2024-11-26 
18:54:58.720305] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:05:32.185 [2024-11-26 18:54:58.720506] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58351 ] 00:05:32.444 [2024-11-26 18:54:58.905987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.702 [2024-11-26 18:54:59.078501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.075 test_start 00:05:34.075 test_end 00:05:34.075 Performance: 267290 events per second 00:05:34.075 00:05:34.075 real 0m1.664s 00:05:34.075 user 0m1.438s 00:05:34.075 sys 0m0.116s 00:05:34.075 ************************************ 00:05:34.075 END TEST event_reactor_perf 00:05:34.075 ************************************ 00:05:34.075 18:55:00 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.075 18:55:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.075 18:55:00 event -- event/event.sh@49 -- # uname -s 00:05:34.075 18:55:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:34.075 18:55:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:34.075 18:55:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.075 18:55:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.075 18:55:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.075 ************************************ 00:05:34.075 START TEST event_scheduler 00:05:34.075 ************************************ 00:05:34.075 18:55:00 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:34.075 * Looking for test storage... 
00:05:34.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:34.075 18:55:00 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.075 18:55:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.075 18:55:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.075 18:55:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:34.075 18:55:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.076 18:55:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:34.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.076 18:55:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.076 18:55:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.076 18:55:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.076 18:55:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:34.076 18:55:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.076 18:55:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.076 --rc genhtml_branch_coverage=1 00:05:34.076 --rc genhtml_function_coverage=1 00:05:34.076 --rc genhtml_legend=1 00:05:34.076 --rc geninfo_all_blocks=1 00:05:34.076 --rc geninfo_unexecuted_blocks=1 00:05:34.076 00:05:34.076 ' 00:05:34.076 18:55:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.076 
--rc genhtml_branch_coverage=1 00:05:34.076 --rc genhtml_function_coverage=1 00:05:34.076 --rc genhtml_legend=1 00:05:34.076 --rc geninfo_all_blocks=1 00:05:34.076 --rc geninfo_unexecuted_blocks=1 00:05:34.076 00:05:34.076 ' 00:05:34.076 18:55:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.076 --rc genhtml_branch_coverage=1 00:05:34.076 --rc genhtml_function_coverage=1 00:05:34.076 --rc genhtml_legend=1 00:05:34.076 --rc geninfo_all_blocks=1 00:05:34.076 --rc geninfo_unexecuted_blocks=1 00:05:34.076 00:05:34.076 ' 00:05:34.076 18:55:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.076 --rc genhtml_branch_coverage=1 00:05:34.076 --rc genhtml_function_coverage=1 00:05:34.076 --rc genhtml_legend=1 00:05:34.076 --rc geninfo_all_blocks=1 00:05:34.076 --rc geninfo_unexecuted_blocks=1 00:05:34.076 00:05:34.076 ' 00:05:34.076 18:55:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:34.076 18:55:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58427 00:05:34.076 18:55:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.076 18:55:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58427 00:05:34.076 18:55:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:34.076 18:55:00 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58427 ']' 00:05:34.076 18:55:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.076 18:55:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.076 18:55:00 event.event_scheduler -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.076 18:55:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.076 18:55:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.076 [2024-11-26 18:55:00.678350] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:05:34.076 [2024-11-26 18:55:00.678853] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58427 ] 00:05:34.334 [2024-11-26 18:55:00.873873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.592 [2024-11-26 18:55:01.042051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.592 [2024-11-26 18:55:01.042146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.592 [2024-11-26 18:55:01.042232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.592 [2024-11-26 18:55:01.042249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.159 18:55:01 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.159 18:55:01 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:35.159 18:55:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:35.159 18:55:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.159 18:55:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.159 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.159 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.159 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.159 POWER: Cannot set governor of lcore 0 to performance 00:05:35.159 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.159 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.159 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.159 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.159 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:35.159 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:35.159 POWER: Unable to set Power Management Environment for lcore 0 00:05:35.159 [2024-11-26 18:55:01.727610] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:35.159 [2024-11-26 18:55:01.727641] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:35.159 [2024-11-26 18:55:01.727657] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:35.159 [2024-11-26 18:55:01.727685] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:35.159 [2024-11-26 18:55:01.727699] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:35.159 [2024-11-26 18:55:01.727714] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:35.159 18:55:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.159 18:55:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:35.159 18:55:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.159 18:55:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.726 [2024-11-26 18:55:02.067566] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:35.726 18:55:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.726 18:55:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:35.726 18:55:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.726 18:55:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.726 18:55:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.726 ************************************ 00:05:35.726 START TEST scheduler_create_thread 00:05:35.726 ************************************ 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.726 2 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.726 3 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.726 4 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.726 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.726 5 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.727 6 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.727 7 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.727 8 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.727 9 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.727 10 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.727 18:55:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.101 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.101 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.101 18:55:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.101 18:55:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.101 18:55:03 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.474 ************************************ 00:05:38.474 END TEST scheduler_create_thread 00:05:38.474 ************************************ 00:05:38.474 18:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.474 00:05:38.474 real 0m2.621s 00:05:38.474 user 0m0.007s 00:05:38.474 sys 0m0.013s 00:05:38.474 18:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.474 18:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.474 18:55:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.474 18:55:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58427 00:05:38.474 18:55:04 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58427 ']' 00:05:38.474 18:55:04 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58427 00:05:38.474 18:55:04 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:38.474 18:55:04 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.474 18:55:04 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58427 00:05:38.474 killing process with pid 58427 00:05:38.474 18:55:04 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:38.474 18:55:04 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:38.474 18:55:04 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58427' 00:05:38.474 18:55:04 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58427 00:05:38.474 18:55:04 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58427 00:05:38.732 [2024-11-26 18:55:05.180296] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:40.140 00:05:40.140 real 0m6.002s 00:05:40.140 user 0m10.575s 00:05:40.140 sys 0m0.538s 00:05:40.140 18:55:06 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.140 18:55:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.140 ************************************ 00:05:40.140 END TEST event_scheduler 00:05:40.140 ************************************ 00:05:40.140 18:55:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:40.140 18:55:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:40.140 18:55:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.140 18:55:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.140 18:55:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.140 ************************************ 00:05:40.140 START TEST app_repeat 00:05:40.140 ************************************ 00:05:40.140 18:55:06 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:40.140 Process app_repeat pid: 58538 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58538 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r 
/var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58538' 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.140 spdk_app_start Round 0 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:40.140 18:55:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:05:40.140 18:55:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58538 ']' 00:05:40.140 18:55:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.140 18:55:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.140 18:55:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.140 18:55:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.140 18:55:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.140 [2024-11-26 18:55:06.520827] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:05:40.140 [2024-11-26 18:55:06.521330] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58538 ] 00:05:40.140 [2024-11-26 18:55:06.709625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.397 [2024-11-26 18:55:06.869843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.397 [2024-11-26 18:55:06.869842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.964 18:55:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.964 18:55:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.964 18:55:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.222 Malloc0 00:05:41.479 18:55:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.737 Malloc1 00:05:41.737 18:55:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.737 18:55:08 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.737 18:55:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.995 /dev/nbd0 00:05:42.255 18:55:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.255 18:55:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.255 1+0 records in 00:05:42.255 1+0 
records out 00:05:42.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342493 s, 12.0 MB/s 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.255 18:55:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.255 18:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.255 18:55:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.255 18:55:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.514 /dev/nbd1 00:05:42.514 18:55:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.514 18:55:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.514 1+0 records in 00:05:42.514 1+0 records out 00:05:42.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547675 s, 7.5 MB/s 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.514 18:55:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.514 18:55:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.514 18:55:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.514 18:55:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.514 18:55:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.514 18:55:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.772 { 00:05:42.772 "nbd_device": "/dev/nbd0", 00:05:42.772 "bdev_name": "Malloc0" 00:05:42.772 }, 00:05:42.772 { 00:05:42.772 "nbd_device": "/dev/nbd1", 00:05:42.772 "bdev_name": "Malloc1" 00:05:42.772 } 00:05:42.772 ]' 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.772 { 00:05:42.772 "nbd_device": "/dev/nbd0", 00:05:42.772 "bdev_name": "Malloc0" 00:05:42.772 }, 00:05:42.772 { 00:05:42.772 "nbd_device": "/dev/nbd1", 00:05:42.772 "bdev_name": "Malloc1" 00:05:42.772 } 00:05:42.772 ]' 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
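The `nbd_get_count` helper traced here pipes the `nbd_get_disks` JSON through `jq -r '.[] | .nbd_device'` and then counts the device paths with `grep -c`. A stand-alone sketch of the same pipeline, with a hard-coded sample standing in for the live RPC response (it assumes `jq` is installed, as the trace itself does):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the rpc.py nbd_get_disks response.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# Extract just the device paths, one per line, as nbd_common.sh@64 does.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# Count how many nbd devices came back, as nbd_common.sh@65 does.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "count=$count"   # two devices attached in this sample
```

In the trace this count is then compared against the expected number of started disks (`'[' 2 -ne 2 ']'`) before data verification proceeds.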
00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.772 /dev/nbd1' 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.772 /dev/nbd1' 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.772 256+0 records in 00:05:42.772 256+0 records out 00:05:42.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108101 s, 97.0 MB/s 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.772 18:55:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.030 256+0 records in 00:05:43.031 256+0 records out 00:05:43.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030552 s, 34.3 MB/s 00:05:43.031 18:55:09 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.031 256+0 records in 00:05:43.031 256+0 records out 00:05:43.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354501 s, 29.6 MB/s 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.031 18:55:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.289 18:55:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.289 18:55:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.289 18:55:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.289 18:55:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.289 18:55:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.289 18:55:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.289 18:55:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.289 18:55:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.289 18:55:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.289 18:55:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.548 18:55:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.548 18:55:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.548 18:55:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.548 18:55:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.548 18:55:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.548 18:55:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.548 18:55:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:43.548 18:55:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.548 18:55:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.548 18:55:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.548 18:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.886 18:55:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.886 18:55:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.157 18:55:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.532 [2024-11-26 18:55:11.854454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.532 [2024-11-26 18:55:11.984619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.532 [2024-11-26 18:55:11.984633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.791 
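The `waitfornbd` / `waitfornbd_exit` helpers seen throughout this trace share one idiom: probe up to 20 times, `break` on success, and report failure only after the retry budget is exhausted. A simplified, device-free rendition of that loop (it polls for an ordinary file rather than an entry in `/proc/partitions`, so it can run anywhere; the function name and timings are illustrative, not SPDK's):

```shell
#!/usr/bin/env bash
# Bounded retry loop in the style of waitfornbd: probe, break on success.
wait_for_path() {
    local target=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        if [ -e "$target" ]; then
            break            # probe succeeded; stop retrying
        fi
        sleep 0.1
    done
    # Same contract as the helper: 0 if found within budget, non-zero otherwise.
    [ -e "$target" ]
}

tmp=$(mktemp -u)                 # a path that does not exist yet
( sleep 0.3; touch "$tmp" ) &    # something creates it shortly after
wait_for_path "$tmp" && echo "found $tmp"
wait                             # reap the background creator
```

The real helpers additionally verify the device is usable once it appears (the `dd iflag=direct` read and `stat` size check visible in the trace), not merely present.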
[2024-11-26 18:55:12.183002] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.791 [2024-11-26 18:55:12.183159] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.165 18:55:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.165 spdk_app_start Round 1 00:05:47.165 18:55:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:47.165 18:55:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:05:47.165 18:55:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58538 ']' 00:05:47.165 18:55:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.165 18:55:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.165 18:55:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:47.165 18:55:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.165 18:55:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.732 18:55:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.732 18:55:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:47.732 18:55:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.990 Malloc0 00:05:47.990 18:55:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.557 Malloc1 00:05:48.557 18:55:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.557 18:55:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.557 18:55:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.557 18:55:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.557 18:55:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.557 18:55:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.557 18:55:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.558 18:55:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.558 18:55:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.558 18:55:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.558 18:55:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.558 18:55:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.558 18:55:14 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.558 18:55:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.558 18:55:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.558 18:55:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.558 /dev/nbd0 00:05:48.816 18:55:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.816 18:55:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.816 18:55:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:48.816 18:55:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.816 18:55:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.816 18:55:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.816 18:55:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:48.816 18:55:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.816 18:55:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.816 18:55:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.816 18:55:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.816 1+0 records in 00:05:48.816 1+0 records out 00:05:48.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329894 s, 12.4 MB/s 00:05:48.816 18:55:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.817 18:55:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.817 18:55:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.817 
18:55:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.817 18:55:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.817 18:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.817 18:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.817 18:55:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.075 /dev/nbd1 00:05:49.075 18:55:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.075 18:55:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.075 1+0 records in 00:05:49.075 1+0 records out 00:05:49.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354014 s, 11.6 MB/s 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.075 18:55:15 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.075 18:55:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.075 18:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.075 18:55:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.075 18:55:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.075 18:55:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.075 18:55:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.642 18:55:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.642 { 00:05:49.642 "nbd_device": "/dev/nbd0", 00:05:49.642 "bdev_name": "Malloc0" 00:05:49.642 }, 00:05:49.642 { 00:05:49.642 "nbd_device": "/dev/nbd1", 00:05:49.642 "bdev_name": "Malloc1" 00:05:49.642 } 00:05:49.642 ]' 00:05:49.642 18:55:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.642 { 00:05:49.642 "nbd_device": "/dev/nbd0", 00:05:49.642 "bdev_name": "Malloc0" 00:05:49.642 }, 00:05:49.642 { 00:05:49.642 "nbd_device": "/dev/nbd1", 00:05:49.642 "bdev_name": "Malloc1" 00:05:49.642 } 00:05:49.642 ]' 00:05:49.642 18:55:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.642 /dev/nbd1' 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.642 /dev/nbd1' 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.642 
18:55:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.642 256+0 records in 00:05:49.642 256+0 records out 00:05:49.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0091923 s, 114 MB/s 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.642 256+0 records in 00:05:49.642 256+0 records out 00:05:49.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332804 s, 31.5 MB/s 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.642 256+0 records in 00:05:49.642 256+0 records out 00:05:49.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.037265 s, 28.1 MB/s 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
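`nbd_dd_data_verify`, traced above, runs in two passes: in `write` mode it fills a temp file with 1 MiB from `/dev/urandom` and `dd`s it onto each nbd device; in `verify` mode it `cmp`s each device back against that file. The same flow, sketched against plain temp files so no nbd devices are needed (and with `oflag=direct` dropped, since direct I/O is only relevant to the real block devices):

```shell
#!/usr/bin/env bash
tmp_file=$(mktemp)    # stands in for .../test/event/nbdrandtest
dev0=$(mktemp)        # stands in for /dev/nbd0
dev1=$(mktemp)        # stands in for /dev/nbd1

# write pass: seed 1 MiB of random data, then copy it onto each "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify pass: byte-compare the first 1M of each "device" with the source,
# as nbd_common.sh@83 does with cmp -b -n 1M
verify_ok=1
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev" || verify_ok=0
done

size=$(stat -c %s "$tmp_file")   # 4096 * 256 = 1048576 bytes
echo "verify_ok=$verify_ok size=$size"
rm -f "$dev0" "$dev1"
```

Against real nbd devices the writes exercise the Malloc bdevs end to end, which is why the trace shows much lower throughput for the device copies than for the initial urandom fill.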
00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.642 18:55:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.643 18:55:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.901 18:55:16 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.901 18:55:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.901 18:55:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.901 18:55:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.901 18:55:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.901 18:55:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.901 18:55:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.901 18:55:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.901 18:55:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.901 18:55:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.467 18:55:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.467 18:55:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.467 18:55:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.467 18:55:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.467 18:55:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.467 18:55:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.467 18:55:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.467 18:55:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.467 18:55:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.467 18:55:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.467 18:55:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.726 18:55:17 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.726 18:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.726 18:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.726 18:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.726 18:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.726 18:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.726 18:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.726 18:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.726 18:55:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.726 18:55:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.726 18:55:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.726 18:55:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.726 18:55:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:51.294 18:55:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.669 [2024-11-26 18:55:18.919138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.669 [2024-11-26 18:55:19.072669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.669 [2024-11-26 18:55:19.072681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.927 [2024-11-26 18:55:19.292416] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.927 [2024-11-26 18:55:19.292578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.302 spdk_app_start Round 2 00:05:54.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
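After both `nbd_stop_disk` calls, `nbd_get_disks` returns an empty array, and the trace shows `-- # true` at `nbd_common.sh@65`: `grep -c` prints `0` but exits non-zero when it matches nothing, so under `set -e` the count pipeline needs a `|| true` (or equivalent) to survive the empty case. A minimal reproduction of that branch:

```shell
#!/usr/bin/env bash
set -e
# Hypothetical empty response, as returned once every disk is stopped.
nbd_disks_json='[]'

nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# grep -c still prints 0 on zero matches but exits 1; without the || true
# an errexit script would die here instead of reporting count=0.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "count=$count"
```

The trace then asserts `'[' 0 -ne 0 ']'` fails and returns 0, confirming a clean teardown before the next `app_repeat` round starts.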
00:05:54.303 18:55:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:54.303 18:55:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:54.303 18:55:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:05:54.303 18:55:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58538 ']' 00:05:54.303 18:55:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.303 18:55:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.303 18:55:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.303 18:55:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.303 18:55:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.561 18:55:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.561 18:55:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:54.561 18:55:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.128 Malloc0 00:05:55.128 18:55:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.387 Malloc1 00:05:55.387 18:55:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.387 18:55:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:55.646 /dev/nbd0 00:05:55.646 18:55:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:55.646 18:55:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:55.646 18:55:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:55.646 18:55:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:55.646 18:55:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:55.646 18:55:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:55.646 18:55:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:55.646 18:55:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:55.646 18:55:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:55.646 18:55:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:55.646 18:55:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:55.646 1+0 records in 00:05:55.646 1+0 records out 00:05:55.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367399 s, 11.1 MB/s 00:05:55.646 18:55:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.904 18:55:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:55.904 18:55:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:55.904 18:55:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:55.904 18:55:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:55.904 18:55:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:55.904 18:55:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.904 18:55:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.163 /dev/nbd1 00:05:56.163 18:55:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.163 18:55:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:56.163 18:55:22 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.163 1+0 records in 00:05:56.163 1+0 records out 00:05:56.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327529 s, 12.5 MB/s 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.163 18:55:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.163 18:55:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.163 18:55:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.163 18:55:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.163 18:55:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.163 18:55:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.422 { 00:05:56.422 "nbd_device": "/dev/nbd0", 00:05:56.422 "bdev_name": "Malloc0" 00:05:56.422 }, 00:05:56.422 { 00:05:56.422 "nbd_device": "/dev/nbd1", 00:05:56.422 "bdev_name": "Malloc1" 00:05:56.422 } 00:05:56.422 ]' 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.422 { 
00:05:56.422 "nbd_device": "/dev/nbd0", 00:05:56.422 "bdev_name": "Malloc0" 00:05:56.422 }, 00:05:56.422 { 00:05:56.422 "nbd_device": "/dev/nbd1", 00:05:56.422 "bdev_name": "Malloc1" 00:05:56.422 } 00:05:56.422 ]' 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.422 /dev/nbd1' 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.422 /dev/nbd1' 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.422 256+0 records in 00:05:56.422 256+0 records out 00:05:56.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00771522 s, 136 MB/s 00:05:56.422 18:55:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.422 18:55:22 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.422 256+0 records in 00:05:56.422 256+0 records out 00:05:56.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0331522 s, 31.6 MB/s 00:05:56.422 18:55:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.422 18:55:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.680 256+0 records in 00:05:56.680 256+0 records out 00:05:56.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0384777 s, 27.3 MB/s 00:05:56.680 18:55:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.680 18:55:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.680 18:55:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.680 18:55:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.680 18:55:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.680 18:55:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.680 18:55:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.681 18:55:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.939 18:55:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.939 18:55:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.939 18:55:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.939 18:55:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.939 18:55:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.939 18:55:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.939 18:55:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.939 18:55:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.939 18:55:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.939 18:55:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.197 18:55:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.198 18:55:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.198 18:55:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.198 18:55:23 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.198 18:55:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.198 18:55:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.198 18:55:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:57.198 18:55:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.198 18:55:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.198 18:55:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.198 18:55:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.765 18:55:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.765 18:55:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.765 18:55:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.765 18:55:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.766 18:55:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.766 18:55:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.766 18:55:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.766 18:55:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.766 18:55:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.766 18:55:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.766 18:55:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.766 18:55:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.766 18:55:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.024 18:55:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.401 
[2024-11-26 18:55:25.835394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.401 [2024-11-26 18:55:25.987326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.401 [2024-11-26 18:55:25.987371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.660 [2024-11-26 18:55:26.206848] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.660 [2024-11-26 18:55:26.206960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.036 18:55:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:06:01.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.036 18:55:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58538 ']' 00:06:01.036 18:55:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.036 18:55:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.036 18:55:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:01.037 18:55:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.037 18:55:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.603 18:55:27 event.app_repeat -- event/event.sh@39 -- # killprocess 58538 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58538 ']' 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58538 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58538 00:06:01.603 killing process with pid 58538 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58538' 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58538 00:06:01.603 18:55:27 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58538 00:06:02.538 spdk_app_start is called in Round 0. 00:06:02.538 Shutdown signal received, stop current app iteration 00:06:02.538 Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 reinitialization... 00:06:02.538 spdk_app_start is called in Round 1. 00:06:02.538 Shutdown signal received, stop current app iteration 00:06:02.538 Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 reinitialization... 00:06:02.538 spdk_app_start is called in Round 2. 
00:06:02.538 Shutdown signal received, stop current app iteration 00:06:02.538 Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 reinitialization... 00:06:02.538 spdk_app_start is called in Round 3. 00:06:02.538 Shutdown signal received, stop current app iteration 00:06:02.538 ************************************ 00:06:02.538 END TEST app_repeat 00:06:02.538 ************************************ 00:06:02.538 18:55:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:02.538 18:55:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:02.538 00:06:02.538 real 0m22.627s 00:06:02.538 user 0m50.176s 00:06:02.538 sys 0m3.317s 00:06:02.538 18:55:29 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.538 18:55:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.538 18:55:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:02.538 18:55:29 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:02.538 18:55:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.538 18:55:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.538 18:55:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.538 ************************************ 00:06:02.538 START TEST cpu_locks 00:06:02.538 ************************************ 00:06:02.538 18:55:29 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:02.795 * Looking for test storage... 
00:06:02.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:02.795 18:55:29 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.795 18:55:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.795 18:55:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.795 18:55:29 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.795 18:55:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:02.795 18:55:29 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.795 18:55:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.795 --rc genhtml_branch_coverage=1 00:06:02.795 --rc genhtml_function_coverage=1 00:06:02.795 --rc genhtml_legend=1 00:06:02.795 --rc geninfo_all_blocks=1 00:06:02.795 --rc geninfo_unexecuted_blocks=1 00:06:02.795 00:06:02.795 ' 00:06:02.795 18:55:29 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.795 --rc genhtml_branch_coverage=1 00:06:02.795 --rc genhtml_function_coverage=1 00:06:02.795 --rc genhtml_legend=1 00:06:02.795 --rc geninfo_all_blocks=1 00:06:02.795 --rc geninfo_unexecuted_blocks=1 
00:06:02.795 00:06:02.795 ' 00:06:02.795 18:55:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.795 --rc genhtml_branch_coverage=1 00:06:02.795 --rc genhtml_function_coverage=1 00:06:02.795 --rc genhtml_legend=1 00:06:02.795 --rc geninfo_all_blocks=1 00:06:02.795 --rc geninfo_unexecuted_blocks=1 00:06:02.795 00:06:02.795 ' 00:06:02.796 18:55:29 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.796 --rc genhtml_branch_coverage=1 00:06:02.796 --rc genhtml_function_coverage=1 00:06:02.796 --rc genhtml_legend=1 00:06:02.796 --rc geninfo_all_blocks=1 00:06:02.796 --rc geninfo_unexecuted_blocks=1 00:06:02.796 00:06:02.796 ' 00:06:02.796 18:55:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:02.796 18:55:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:02.796 18:55:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:02.796 18:55:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:02.796 18:55:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.796 18:55:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.796 18:55:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.796 ************************************ 00:06:02.796 START TEST default_locks 00:06:02.796 ************************************ 00:06:02.796 18:55:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:02.796 18:55:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59026 00:06:02.796 18:55:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59026 00:06:02.796 18:55:29 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 59026 ']' 00:06:02.796 18:55:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.796 18:55:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.796 18:55:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.796 18:55:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.796 18:55:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.796 18:55:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.054 [2024-11-26 18:55:29.472786] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:06:03.054 [2024-11-26 18:55:29.472969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59026 ] 00:06:03.054 [2024-11-26 18:55:29.675267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.318 [2024-11-26 18:55:29.836754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.252 18:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.252 18:55:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:04.252 18:55:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59026 00:06:04.252 18:55:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59026 00:06:04.252 18:55:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.819 18:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59026 00:06:04.819 18:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59026 ']' 00:06:04.819 18:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59026 00:06:04.819 18:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:04.819 18:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.819 18:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59026 00:06:04.819 18:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.819 killing process with pid 59026 00:06:04.819 18:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.819 18:55:31 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59026' 00:06:04.819 18:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59026 00:06:04.819 18:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59026 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59026 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59026 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59026 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59026 ']' 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.354 ERROR: process (pid: 59026) is no longer running 00:06:07.354 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59026) - No such process 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.354 00:06:07.354 real 0m4.644s 00:06:07.354 user 0m4.586s 00:06:07.354 sys 0m0.879s 00:06:07.354 ************************************ 00:06:07.354 END TEST default_locks 00:06:07.354 ************************************ 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.354 18:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.613 18:55:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:07.613 18:55:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:07.613 18:55:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.613 18:55:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.613 ************************************ 00:06:07.613 START TEST default_locks_via_rpc 00:06:07.613 ************************************ 00:06:07.613 18:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:07.613 18:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59103 00:06:07.613 18:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59103 00:06:07.613 18:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.613 18:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59103 ']' 00:06:07.613 18:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.613 18:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.613 18:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.613 18:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.613 18:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.613 [2024-11-26 18:55:34.184770] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:06:07.613 [2024-11-26 18:55:34.184993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59103 ] 00:06:07.873 [2024-11-26 18:55:34.382142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.133 [2024-11-26 18:55:34.577368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.114 18:55:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59103 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59103 00:06:09.114 18:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.681 18:55:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59103 00:06:09.681 18:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59103 ']' 00:06:09.681 18:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59103 00:06:09.681 18:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:09.681 18:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.681 18:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59103 00:06:09.681 killing process with pid 59103 00:06:09.681 18:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.681 18:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.681 18:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59103' 00:06:09.681 18:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59103 00:06:09.681 18:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59103 00:06:12.212 ************************************ 00:06:12.212 END TEST default_locks_via_rpc 00:06:12.212 ************************************ 00:06:12.212 00:06:12.212 real 0m4.789s 00:06:12.212 user 0m4.711s 00:06:12.212 sys 0m0.966s 00:06:12.212 
18:55:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.212 18:55:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.471 18:55:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:12.471 18:55:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.471 18:55:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.471 18:55:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.471 ************************************ 00:06:12.471 START TEST non_locking_app_on_locked_coremask 00:06:12.471 ************************************ 00:06:12.471 18:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:12.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.471 18:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59188 00:06:12.471 18:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.471 18:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59188 /var/tmp/spdk.sock 00:06:12.471 18:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59188 ']' 00:06:12.471 18:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.471 18:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.471 18:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.471 18:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.471 18:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.471 [2024-11-26 18:55:39.023471] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:06:12.471 [2024-11-26 18:55:39.023908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59188 ] 00:06:12.729 [2024-11-26 18:55:39.206723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.987 [2024-11-26 18:55:39.382528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.936 18:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.936 18:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.936 18:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59204 00:06:13.936 18:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59204 /var/tmp/spdk2.sock 00:06:13.936 18:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:13.936 18:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59204 ']' 00:06:13.936 18:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.936 18:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.936 18:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:13.936 18:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.936 18:55:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.252 [2024-11-26 18:55:40.563279] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:06:14.252 [2024-11-26 18:55:40.563813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59204 ] 00:06:14.252 [2024-11-26 18:55:40.770575] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:14.252 [2024-11-26 18:55:40.770676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.510 [2024-11-26 18:55:41.098115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.034 18:55:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.034 18:55:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:17.034 18:55:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59188 00:06:17.034 18:55:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59188 00:06:17.034 18:55:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.598 18:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59188 00:06:17.598 18:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59188 ']' 00:06:17.598 18:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59188 00:06:17.598 18:55:44 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.598 18:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.598 18:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59188 00:06:17.856 killing process with pid 59188 00:06:17.856 18:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.856 18:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.856 18:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59188' 00:06:17.856 18:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59188 00:06:17.856 18:55:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59188 00:06:23.117 18:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59204 00:06:23.117 18:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59204 ']' 00:06:23.117 18:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59204 00:06:23.117 18:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.117 18:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.117 18:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59204 00:06:23.117 killing process with pid 59204 00:06:23.117 18:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:06:23.117 18:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.117 18:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59204' 00:06:23.117 18:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59204 00:06:23.117 18:55:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59204 00:06:25.677 ************************************ 00:06:25.677 END TEST non_locking_app_on_locked_coremask 00:06:25.677 ************************************ 00:06:25.677 00:06:25.677 real 0m12.828s 00:06:25.677 user 0m13.280s 00:06:25.677 sys 0m1.673s 00:06:25.677 18:55:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.677 18:55:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.677 18:55:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:25.677 18:55:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.677 18:55:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.677 18:55:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.677 ************************************ 00:06:25.677 START TEST locking_app_on_unlocked_coremask 00:06:25.677 ************************************ 00:06:25.677 18:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:25.677 18:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59370 00:06:25.677 18:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:25.677 18:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59370 /var/tmp/spdk.sock 00:06:25.677 18:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59370 ']' 00:06:25.677 18:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.677 18:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.677 18:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.677 18:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.677 18:55:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.677 [2024-11-26 18:55:51.902596] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:06:25.677 [2024-11-26 18:55:51.903063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59370 ] 00:06:25.677 [2024-11-26 18:55:52.082439] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.677 [2024-11-26 18:55:52.082576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.677 [2024-11-26 18:55:52.233468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.687 18:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.687 18:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.687 18:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59386 00:06:26.687 18:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.687 18:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59386 /var/tmp/spdk2.sock 00:06:26.687 18:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59386 ']' 00:06:26.687 18:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.687 18:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.687 18:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.687 18:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.688 18:55:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.956 [2024-11-26 18:55:53.392127] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:06:26.956 [2024-11-26 18:55:53.392716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59386 ] 00:06:27.214 [2024-11-26 18:55:53.609448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.472 [2024-11-26 18:55:53.914688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.002 18:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.002 18:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.002 18:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59386 00:06:30.002 18:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59386 00:06:30.002 18:55:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.996 18:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59370 00:06:30.996 18:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59370 ']' 00:06:30.996 18:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59370 00:06:30.996 18:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.996 18:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.996 18:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59370 00:06:30.996 killing process with pid 59370 00:06:30.996 18:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.996 18:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.996 18:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59370' 00:06:30.996 18:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59370 00:06:30.996 18:55:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59370 00:06:35.180 18:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59386 00:06:35.180 18:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59386 ']' 00:06:35.180 18:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59386 00:06:35.180 18:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.180 18:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.180 18:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59386 00:06:35.454 18:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.454 killing process with pid 59386 00:06:35.454 18:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.454 18:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59386' 00:06:35.454 18:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59386 00:06:35.454 18:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59386 00:06:37.983 00:06:37.983 real 0m12.336s 00:06:37.983 user 0m13.035s 00:06:37.983 sys 0m1.704s 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.983 ************************************ 00:06:37.983 END TEST locking_app_on_unlocked_coremask 00:06:37.983 ************************************ 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.983 18:56:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:37.983 18:56:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.983 18:56:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.983 18:56:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.983 ************************************ 00:06:37.983 START TEST locking_app_on_locked_coremask 00:06:37.983 ************************************ 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59546 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59546 /var/tmp/spdk.sock 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59546 ']' 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.983 18:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.983 [2024-11-26 18:56:04.251438] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:06:37.983 [2024-11-26 18:56:04.251620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59546 ] 00:06:37.983 [2024-11-26 18:56:04.429458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.983 [2024-11-26 18:56:04.559877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59562 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59562 /var/tmp/spdk2.sock 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59562 /var/tmp/spdk2.sock 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59562 /var/tmp/spdk2.sock 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59562 ']' 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.994 18:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.994 [2024-11-26 18:56:05.601570] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:06:38.994 [2024-11-26 18:56:05.601842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59562 ] 00:06:39.252 [2024-11-26 18:56:05.800193] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59546 has claimed it. 00:06:39.252 [2024-11-26 18:56:05.800328] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.818 ERROR: process (pid: 59562) is no longer running 00:06:39.818 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59562) - No such process 00:06:39.818 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.818 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.818 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.818 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.818 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.818 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.818 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59546 00:06:39.818 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59546 00:06:39.818 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.384 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59546 00:06:40.384 18:56:06 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59546 ']' 00:06:40.384 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59546 00:06:40.384 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:40.384 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.384 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59546 00:06:40.384 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.384 killing process with pid 59546 00:06:40.384 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.384 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59546' 00:06:40.384 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59546 00:06:40.384 18:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59546 00:06:42.917 00:06:42.917 real 0m4.965s 00:06:42.917 user 0m5.329s 00:06:42.917 sys 0m1.010s 00:06:42.917 18:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.917 18:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.917 ************************************ 00:06:42.917 END TEST locking_app_on_locked_coremask 00:06:42.917 ************************************ 00:06:42.917 18:56:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:42.917 18:56:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:42.917 18:56:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.917 18:56:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.917 ************************************ 00:06:42.917 START TEST locking_overlapped_coremask 00:06:42.917 ************************************ 00:06:42.917 18:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:42.917 18:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59632 00:06:42.917 18:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59632 /var/tmp/spdk.sock 00:06:42.917 18:56:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:42.917 18:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59632 ']' 00:06:42.917 18:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.917 18:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.917 18:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.917 18:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.917 18:56:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.917 [2024-11-26 18:56:09.288302] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:06:42.917 [2024-11-26 18:56:09.288503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59632 ] 00:06:42.917 [2024-11-26 18:56:09.481892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.175 [2024-11-26 18:56:09.652767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.175 [2024-11-26 18:56:09.652930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.175 [2024-11-26 18:56:09.652974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59655 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59655 /var/tmp/spdk2.sock 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59655 /var/tmp/spdk2.sock 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:44.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59655 /var/tmp/spdk2.sock 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59655 ']' 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.107 18:56:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.107 [2024-11-26 18:56:10.688514] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:06:44.107 [2024-11-26 18:56:10.689410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59655 ] 00:06:44.365 [2024-11-26 18:56:10.907138] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59632 has claimed it. 00:06:44.365 [2024-11-26 18:56:10.907245] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:44.931 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59655) - No such process 00:06:44.931 ERROR: process (pid: 59655) is no longer running 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59632 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59632 ']' 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59632 00:06:44.931 18:56:11 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59632 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59632' 00:06:44.931 killing process with pid 59632 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59632 00:06:44.931 18:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59632 00:06:47.470 00:06:47.470 real 0m4.534s 00:06:47.470 user 0m12.159s 00:06:47.470 sys 0m0.780s 00:06:47.470 18:56:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.470 18:56:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.470 ************************************ 00:06:47.470 END TEST locking_overlapped_coremask 00:06:47.471 ************************************ 00:06:47.471 18:56:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:47.471 18:56:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.471 18:56:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.471 18:56:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.471 ************************************ 00:06:47.471 START TEST 
locking_overlapped_coremask_via_rpc 00:06:47.471 ************************************ 00:06:47.471 18:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:47.471 18:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59719 00:06:47.471 18:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:47.471 18:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59719 /var/tmp/spdk.sock 00:06:47.471 18:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59719 ']' 00:06:47.471 18:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.471 18:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.471 18:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.471 18:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.471 18:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.471 [2024-11-26 18:56:13.877357] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:06:47.471 [2024-11-26 18:56:13.877562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59719 ] 00:06:47.471 [2024-11-26 18:56:14.064749] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:47.471 [2024-11-26 18:56:14.064840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.728 [2024-11-26 18:56:14.199837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.728 [2024-11-26 18:56:14.199966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.728 [2024-11-26 18:56:14.199979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.660 18:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.660 18:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.660 18:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:48.660 18:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59737 00:06:48.660 18:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59737 /var/tmp/spdk2.sock 00:06:48.660 18:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59737 ']' 00:06:48.660 18:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.660 18:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.660 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.660 18:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.660 18:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.660 18:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.660 [2024-11-26 18:56:15.204936] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:06:48.660 [2024-11-26 18:56:15.205126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59737 ] 00:06:48.918 [2024-11-26 18:56:15.407644] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:48.919 [2024-11-26 18:56:15.407754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.176 [2024-11-26 18:56:15.725215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.176 [2024-11-26 18:56:15.729473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.176 [2024-11-26 18:56:15.729481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.709 18:56:18 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.709 [2024-11-26 18:56:18.073669] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59719 has claimed it. 00:06:51.709 request: 00:06:51.709 { 00:06:51.709 "method": "framework_enable_cpumask_locks", 00:06:51.709 "req_id": 1 00:06:51.709 } 00:06:51.709 Got JSON-RPC error response 00:06:51.709 response: 00:06:51.709 { 00:06:51.709 "code": -32603, 00:06:51.709 "message": "Failed to claim CPU core: 2" 00:06:51.709 } 00:06:51.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59719 /var/tmp/spdk.sock 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59719 ']' 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.709 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.967 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.967 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:51.967 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59737 /var/tmp/spdk2.sock 00:06:51.967 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59737 ']' 00:06:51.967 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.967 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.967 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:51.967 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.967 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.226 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.227 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:52.227 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:52.227 ************************************ 00:06:52.227 END TEST locking_overlapped_coremask_via_rpc 00:06:52.227 ************************************ 00:06:52.227 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.227 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.227 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.227 00:06:52.227 real 0m4.944s 00:06:52.227 user 0m1.834s 00:06:52.227 sys 0m0.253s 00:06:52.227 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.227 18:56:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.227 18:56:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:52.227 18:56:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59719 ]] 00:06:52.227 18:56:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59719 00:06:52.227 18:56:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59719 ']' 00:06:52.227 18:56:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59719 00:06:52.227 18:56:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:52.227 18:56:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.227 18:56:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59719 00:06:52.227 killing process with pid 59719 00:06:52.227 18:56:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.227 18:56:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.227 18:56:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59719' 00:06:52.227 18:56:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59719 00:06:52.227 18:56:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59719 00:06:54.766 18:56:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59737 ]] 00:06:54.766 18:56:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59737 00:06:54.766 18:56:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59737 ']' 00:06:54.766 18:56:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59737 00:06:54.766 18:56:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:54.766 18:56:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.766 18:56:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59737 00:06:54.766 killing process with pid 59737 00:06:54.766 18:56:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:54.766 18:56:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:54.766 18:56:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59737' 00:06:54.766 18:56:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59737 00:06:54.766 18:56:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59737 00:06:57.346 18:56:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.346 18:56:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.346 18:56:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59719 ]] 00:06:57.346 18:56:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59719 00:06:57.346 18:56:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59719 ']' 00:06:57.346 18:56:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59719 00:06:57.346 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59719) - No such process 00:06:57.346 Process with pid 59719 is not found 00:06:57.346 18:56:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59719 is not found' 00:06:57.346 18:56:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59737 ]] 00:06:57.346 Process with pid 59737 is not found 00:06:57.346 18:56:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59737 00:06:57.346 18:56:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59737 ']' 00:06:57.346 18:56:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59737 00:06:57.346 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59737) - No such process 00:06:57.346 18:56:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59737 is not found' 00:06:57.346 18:56:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.346 ************************************ 00:06:57.346 END TEST cpu_locks 00:06:57.346 ************************************ 00:06:57.346 00:06:57.346 real 0m54.596s 00:06:57.346 user 1m33.018s 00:06:57.346 sys 0m8.650s 00:06:57.346 18:56:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:57.346 18:56:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.346 ************************************ 00:06:57.346 END TEST event 00:06:57.346 ************************************ 00:06:57.346 00:06:57.346 real 1m28.730s 00:06:57.346 user 2m41.266s 00:06:57.346 sys 0m13.141s 00:06:57.346 18:56:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.346 18:56:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.346 18:56:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.346 18:56:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.346 18:56:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.346 18:56:23 -- common/autotest_common.sh@10 -- # set +x 00:06:57.346 ************************************ 00:06:57.346 START TEST thread 00:06:57.346 ************************************ 00:06:57.346 18:56:23 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.346 * Looking for test storage... 
00:06:57.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:57.346 18:56:23 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.346 18:56:23 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.346 18:56:23 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.604 18:56:23 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.604 18:56:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.604 18:56:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.604 18:56:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.604 18:56:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.604 18:56:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.604 18:56:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.604 18:56:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.604 18:56:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.604 18:56:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.604 18:56:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.604 18:56:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.604 18:56:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:57.604 18:56:23 thread -- scripts/common.sh@345 -- # : 1 00:06:57.604 18:56:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.604 18:56:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.604 18:56:23 thread -- scripts/common.sh@365 -- # decimal 1 00:06:57.604 18:56:23 thread -- scripts/common.sh@353 -- # local d=1 00:06:57.604 18:56:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.604 18:56:24 thread -- scripts/common.sh@355 -- # echo 1 00:06:57.604 18:56:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.604 18:56:24 thread -- scripts/common.sh@366 -- # decimal 2 00:06:57.604 18:56:24 thread -- scripts/common.sh@353 -- # local d=2 00:06:57.604 18:56:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.604 18:56:24 thread -- scripts/common.sh@355 -- # echo 2 00:06:57.604 18:56:24 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.604 18:56:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.604 18:56:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.604 18:56:24 thread -- scripts/common.sh@368 -- # return 0 00:06:57.604 18:56:24 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.604 18:56:24 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.604 --rc genhtml_branch_coverage=1 00:06:57.604 --rc genhtml_function_coverage=1 00:06:57.604 --rc genhtml_legend=1 00:06:57.604 --rc geninfo_all_blocks=1 00:06:57.604 --rc geninfo_unexecuted_blocks=1 00:06:57.604 00:06:57.604 ' 00:06:57.604 18:56:24 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.605 --rc genhtml_branch_coverage=1 00:06:57.605 --rc genhtml_function_coverage=1 00:06:57.605 --rc genhtml_legend=1 00:06:57.605 --rc geninfo_all_blocks=1 00:06:57.605 --rc geninfo_unexecuted_blocks=1 00:06:57.605 00:06:57.605 ' 00:06:57.605 18:56:24 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.605 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.605 --rc genhtml_branch_coverage=1 00:06:57.605 --rc genhtml_function_coverage=1 00:06:57.605 --rc genhtml_legend=1 00:06:57.605 --rc geninfo_all_blocks=1 00:06:57.605 --rc geninfo_unexecuted_blocks=1 00:06:57.605 00:06:57.605 ' 00:06:57.605 18:56:24 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.605 --rc genhtml_branch_coverage=1 00:06:57.605 --rc genhtml_function_coverage=1 00:06:57.605 --rc genhtml_legend=1 00:06:57.605 --rc geninfo_all_blocks=1 00:06:57.605 --rc geninfo_unexecuted_blocks=1 00:06:57.605 00:06:57.605 ' 00:06:57.605 18:56:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.605 18:56:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:57.605 18:56:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.605 18:56:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.605 ************************************ 00:06:57.605 START TEST thread_poller_perf 00:06:57.605 ************************************ 00:06:57.605 18:56:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.605 [2024-11-26 18:56:24.072609] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:06:57.605 [2024-11-26 18:56:24.073048] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59940 ] 00:06:57.862 [2024-11-26 18:56:24.258079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.862 [2024-11-26 18:56:24.389210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.862 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:59.239 [2024-11-26T18:56:25.862Z] ====================================== 00:06:59.239 [2024-11-26T18:56:25.862Z] busy:2209823050 (cyc) 00:06:59.239 [2024-11-26T18:56:25.862Z] total_run_count: 301000 00:06:59.239 [2024-11-26T18:56:25.862Z] tsc_hz: 2200000000 (cyc) 00:06:59.239 [2024-11-26T18:56:25.862Z] ====================================== 00:06:59.239 [2024-11-26T18:56:25.862Z] poller_cost: 7341 (cyc), 3336 (nsec) 00:06:59.239 ************************************ 00:06:59.239 END TEST thread_poller_perf 00:06:59.239 ************************************ 00:06:59.239 00:06:59.239 real 0m1.616s 00:06:59.239 user 0m1.393s 00:06:59.239 sys 0m0.112s 00:06:59.239 18:56:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.239 18:56:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.239 18:56:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.239 18:56:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:59.239 18:56:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.239 18:56:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.239 ************************************ 00:06:59.239 START TEST thread_poller_perf 00:06:59.239 
************************************ 00:06:59.239 18:56:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.239 [2024-11-26 18:56:25.739342] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:06:59.239 [2024-11-26 18:56:25.739535] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59982 ] 00:06:59.497 [2024-11-26 18:56:25.933635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.497 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:59.497 [2024-11-26 18:56:26.063166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.875 [2024-11-26T18:56:27.498Z] ====================================== 00:07:00.875 [2024-11-26T18:56:27.498Z] busy:2204022687 (cyc) 00:07:00.875 [2024-11-26T18:56:27.498Z] total_run_count: 3815000 00:07:00.875 [2024-11-26T18:56:27.498Z] tsc_hz: 2200000000 (cyc) 00:07:00.875 [2024-11-26T18:56:27.498Z] ====================================== 00:07:00.875 [2024-11-26T18:56:27.498Z] poller_cost: 577 (cyc), 262 (nsec) 00:07:00.875 00:07:00.875 real 0m1.607s 00:07:00.875 user 0m1.388s 00:07:00.875 sys 0m0.109s 00:07:00.875 18:56:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.875 ************************************ 00:07:00.875 END TEST thread_poller_perf 00:07:00.875 ************************************ 00:07:00.875 18:56:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.875 18:56:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:00.875 00:07:00.875 real 0m3.521s 00:07:00.875 user 0m2.909s 00:07:00.875 sys 0m0.386s 00:07:00.875 ************************************ 
00:07:00.875 END TEST thread 00:07:00.875 ************************************ 00:07:00.875 18:56:27 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.875 18:56:27 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.875 18:56:27 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:00.875 18:56:27 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:00.875 18:56:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.875 18:56:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.875 18:56:27 -- common/autotest_common.sh@10 -- # set +x 00:07:00.875 ************************************ 00:07:00.875 START TEST app_cmdline 00:07:00.875 ************************************ 00:07:00.875 18:56:27 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:00.875 * Looking for test storage... 00:07:00.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:00.875 18:56:27 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.875 18:56:27 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.875 18:56:27 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.133 18:56:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.133 --rc genhtml_branch_coverage=1 00:07:01.133 --rc genhtml_function_coverage=1 00:07:01.133 --rc 
genhtml_legend=1 00:07:01.133 --rc geninfo_all_blocks=1 00:07:01.133 --rc geninfo_unexecuted_blocks=1 00:07:01.133 00:07:01.133 ' 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.133 --rc genhtml_branch_coverage=1 00:07:01.133 --rc genhtml_function_coverage=1 00:07:01.133 --rc genhtml_legend=1 00:07:01.133 --rc geninfo_all_blocks=1 00:07:01.133 --rc geninfo_unexecuted_blocks=1 00:07:01.133 00:07:01.133 ' 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.133 --rc genhtml_branch_coverage=1 00:07:01.133 --rc genhtml_function_coverage=1 00:07:01.133 --rc genhtml_legend=1 00:07:01.133 --rc geninfo_all_blocks=1 00:07:01.133 --rc geninfo_unexecuted_blocks=1 00:07:01.133 00:07:01.133 ' 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.133 --rc genhtml_branch_coverage=1 00:07:01.133 --rc genhtml_function_coverage=1 00:07:01.133 --rc genhtml_legend=1 00:07:01.133 --rc geninfo_all_blocks=1 00:07:01.133 --rc geninfo_unexecuted_blocks=1 00:07:01.133 00:07:01.133 ' 00:07:01.133 18:56:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.133 18:56:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60071 00:07:01.133 18:56:27 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.133 18:56:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60071 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60071 ']' 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.133 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.133 18:56:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.133 [2024-11-26 18:56:27.708182] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:07:01.133 [2024-11-26 18:56:27.708394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60071 ] 00:07:01.391 [2024-11-26 18:56:27.894022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.650 [2024-11-26 18:56:28.026209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.584 18:56:28 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.584 18:56:28 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:02.584 18:56:28 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:02.841 { 00:07:02.841 "version": "SPDK v25.01-pre git sha1 971ec0126", 00:07:02.841 "fields": { 00:07:02.841 "major": 25, 00:07:02.841 "minor": 1, 00:07:02.841 "patch": 0, 00:07:02.841 "suffix": "-pre", 00:07:02.841 "commit": "971ec0126" 00:07:02.841 } 00:07:02.841 } 00:07:02.841 18:56:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.841 18:56:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.841 18:56:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:02.841 18:56:29 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.841 18:56:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.841 18:56:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:02.841 18:56:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.841 18:56:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:02.841 18:56:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:02.841 18:56:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:02.841 18:56:29 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.100 request: 00:07:03.100 { 00:07:03.100 "method": "env_dpdk_get_mem_stats", 00:07:03.100 "req_id": 1 00:07:03.100 } 00:07:03.100 Got JSON-RPC error response 00:07:03.100 response: 00:07:03.100 { 00:07:03.100 "code": -32601, 00:07:03.100 "message": "Method not found" 00:07:03.100 } 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.100 18:56:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60071 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60071 ']' 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60071 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60071 00:07:03.100 killing process with pid 60071 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60071' 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@973 -- # kill 60071 00:07:03.100 18:56:29 app_cmdline -- common/autotest_common.sh@978 -- # wait 60071 00:07:05.647 ************************************ 00:07:05.647 END TEST app_cmdline 00:07:05.647 ************************************ 
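The app_cmdline test above starts spdk_tgt with `--rpcs-allowed spdk_get_version,rpc_get_methods` and then confirms that any other method (here `env_dpdk_get_mem_stats`) is rejected with the JSON-RPC "Method not found" error, code -32601. A minimal sketch of checking such a response; the error body is the one printed in the log, and the helper name is hypothetical, not part of SPDK:

```python
import json

# The error body printed by the test above, verbatim.
response = json.loads("""
{
  "code": -32601,
  "message": "Method not found"
}
""")

# -32601 is the JSON-RPC 2.0 reserved code for "Method not found",
# which is what the target returns for methods outside --rpcs-allowed.
def is_method_not_found(err):
    return err.get("code") == -32601

print(is_method_not_found(response))  # True
```

This is why the test treats a nonzero exit from rpc.py as success (`es=1`): the restricted target is expected to refuse the call.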
00:07:05.647 00:07:05.647 real 0m4.566s 00:07:05.647 user 0m5.050s 00:07:05.647 sys 0m0.704s 00:07:05.647 18:56:31 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.647 18:56:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.647 18:56:32 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:05.647 18:56:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.647 18:56:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.647 18:56:32 -- common/autotest_common.sh@10 -- # set +x 00:07:05.647 ************************************ 00:07:05.647 START TEST version 00:07:05.647 ************************************ 00:07:05.647 18:56:32 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:05.647 * Looking for test storage... 00:07:05.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:05.647 18:56:32 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.647 18:56:32 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.647 18:56:32 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.647 18:56:32 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.647 18:56:32 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.647 18:56:32 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.647 18:56:32 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.647 18:56:32 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.647 18:56:32 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.647 18:56:32 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.647 18:56:32 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.647 18:56:32 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.647 18:56:32 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.647 18:56:32 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:05.647 18:56:32 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.647 18:56:32 version -- scripts/common.sh@344 -- # case "$op" in 00:07:05.647 18:56:32 version -- scripts/common.sh@345 -- # : 1 00:07:05.647 18:56:32 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.647 18:56:32 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.647 18:56:32 version -- scripts/common.sh@365 -- # decimal 1 00:07:05.647 18:56:32 version -- scripts/common.sh@353 -- # local d=1 00:07:05.647 18:56:32 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.647 18:56:32 version -- scripts/common.sh@355 -- # echo 1 00:07:05.647 18:56:32 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.647 18:56:32 version -- scripts/common.sh@366 -- # decimal 2 00:07:05.647 18:56:32 version -- scripts/common.sh@353 -- # local d=2 00:07:05.647 18:56:32 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.647 18:56:32 version -- scripts/common.sh@355 -- # echo 2 00:07:05.647 18:56:32 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.648 18:56:32 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.648 18:56:32 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.648 18:56:32 version -- scripts/common.sh@368 -- # return 0 00:07:05.648 18:56:32 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.648 18:56:32 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.648 --rc genhtml_branch_coverage=1 00:07:05.648 --rc genhtml_function_coverage=1 00:07:05.648 --rc genhtml_legend=1 00:07:05.648 --rc geninfo_all_blocks=1 00:07:05.648 --rc geninfo_unexecuted_blocks=1 00:07:05.648 00:07:05.648 ' 00:07:05.648 18:56:32 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:07:05.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.648 --rc genhtml_branch_coverage=1 00:07:05.648 --rc genhtml_function_coverage=1 00:07:05.648 --rc genhtml_legend=1 00:07:05.648 --rc geninfo_all_blocks=1 00:07:05.648 --rc geninfo_unexecuted_blocks=1 00:07:05.648 00:07:05.648 ' 00:07:05.648 18:56:32 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.648 --rc genhtml_branch_coverage=1 00:07:05.648 --rc genhtml_function_coverage=1 00:07:05.648 --rc genhtml_legend=1 00:07:05.648 --rc geninfo_all_blocks=1 00:07:05.648 --rc geninfo_unexecuted_blocks=1 00:07:05.648 00:07:05.648 ' 00:07:05.648 18:56:32 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.648 --rc genhtml_branch_coverage=1 00:07:05.648 --rc genhtml_function_coverage=1 00:07:05.648 --rc genhtml_legend=1 00:07:05.648 --rc geninfo_all_blocks=1 00:07:05.648 --rc geninfo_unexecuted_blocks=1 00:07:05.648 00:07:05.648 ' 00:07:05.648 18:56:32 version -- app/version.sh@17 -- # get_header_version major 00:07:05.648 18:56:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.648 18:56:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.648 18:56:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.648 18:56:32 version -- app/version.sh@17 -- # major=25 00:07:05.648 18:56:32 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.648 18:56:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.648 18:56:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.648 18:56:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.648 18:56:32 version -- app/version.sh@18 -- # minor=1 00:07:05.648 18:56:32 
version -- app/version.sh@19 -- # get_header_version patch 00:07:05.648 18:56:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.648 18:56:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.648 18:56:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.648 18:56:32 version -- app/version.sh@19 -- # patch=0 00:07:05.648 18:56:32 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.648 18:56:32 version -- app/version.sh@14 -- # cut -f2 00:07:05.648 18:56:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.648 18:56:32 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.648 18:56:32 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.648 18:56:32 version -- app/version.sh@22 -- # version=25.1 00:07:05.648 18:56:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.648 18:56:32 version -- app/version.sh@28 -- # version=25.1rc0 00:07:05.648 18:56:32 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:05.648 18:56:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.907 18:56:32 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:05.907 18:56:32 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:05.907 00:07:05.907 real 0m0.262s 00:07:05.907 user 0m0.163s 00:07:05.907 sys 0m0.138s 00:07:05.907 ************************************ 00:07:05.907 END TEST version 00:07:05.907 ************************************ 00:07:05.907 18:56:32 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.907 18:56:32 version -- common/autotest_common.sh@10 -- # set +x 00:07:05.907 
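The version.sh run above extracts SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX from include/spdk/version.h (25, 1, 0, -pre) and composes "25.1rc0", which it then matches against the Python package's `spdk.__version__`. A sketch of the same composition as the trace shows it: patch is appended only when nonzero, and the -pre suffix becomes an rc0 tag (that mapping is inferred from this single trace, so treat it as an assumption):

```python
def compose_version(major, minor, patch, suffix):
    # Mirrors the shell trace above: version=25.1, patch==0 so no ".0",
    # then the "-pre" suffix turns into an "rc0" pre-release tag.
    version = f"{major}.{minor}"
    if patch != 0:
        version += f".{patch}"
    if suffix == "-pre":
        version += "rc0"
    return version

print(compose_version(25, 1, 0, "-pre"))  # 25.1rc0
```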
18:56:32 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:05.907 18:56:32 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:05.907 18:56:32 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:05.907 18:56:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.907 18:56:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.907 18:56:32 -- common/autotest_common.sh@10 -- # set +x 00:07:05.907 ************************************ 00:07:05.907 START TEST bdev_raid 00:07:05.907 ************************************ 00:07:05.907 18:56:32 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:05.907 * Looking for test storage... 00:07:05.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:05.907 18:56:32 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.907 18:56:32 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.907 18:56:32 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.907 18:56:32 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.907 18:56:32 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:06.167 18:56:32 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.167 18:56:32 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.167 18:56:32 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.167 18:56:32 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:06.167 18:56:32 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.167 18:56:32 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.167 --rc genhtml_branch_coverage=1 00:07:06.167 --rc genhtml_function_coverage=1 00:07:06.167 --rc genhtml_legend=1 00:07:06.167 --rc geninfo_all_blocks=1 00:07:06.167 --rc geninfo_unexecuted_blocks=1 00:07:06.167 00:07:06.167 ' 00:07:06.167 18:56:32 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.167 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:06.167 --rc genhtml_branch_coverage=1 00:07:06.167 --rc genhtml_function_coverage=1 00:07:06.167 --rc genhtml_legend=1 00:07:06.167 --rc geninfo_all_blocks=1 00:07:06.167 --rc geninfo_unexecuted_blocks=1 00:07:06.167 00:07:06.167 ' 00:07:06.167 18:56:32 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.167 --rc genhtml_branch_coverage=1 00:07:06.167 --rc genhtml_function_coverage=1 00:07:06.167 --rc genhtml_legend=1 00:07:06.167 --rc geninfo_all_blocks=1 00:07:06.167 --rc geninfo_unexecuted_blocks=1 00:07:06.167 00:07:06.167 ' 00:07:06.167 18:56:32 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.167 --rc genhtml_branch_coverage=1 00:07:06.167 --rc genhtml_function_coverage=1 00:07:06.167 --rc genhtml_legend=1 00:07:06.167 --rc geninfo_all_blocks=1 00:07:06.167 --rc geninfo_unexecuted_blocks=1 00:07:06.167 00:07:06.167 ' 00:07:06.167 18:56:32 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:06.167 18:56:32 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:06.167 18:56:32 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:06.167 18:56:32 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:06.167 18:56:32 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:06.167 18:56:32 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:06.167 18:56:32 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:06.167 18:56:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.167 18:56:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.167 18:56:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.167 ************************************ 
00:07:06.167 START TEST raid1_resize_data_offset_test 00:07:06.167 ************************************ 00:07:06.167 18:56:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:06.167 18:56:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60258 00:07:06.167 18:56:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.167 18:56:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60258' 00:07:06.167 Process raid pid: 60258 00:07:06.167 18:56:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60258 00:07:06.167 18:56:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60258 ']' 00:07:06.167 18:56:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.167 18:56:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.167 18:56:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.167 18:56:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.167 18:56:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.167 [2024-11-26 18:56:32.663775] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:07:06.167 [2024-11-26 18:56:32.664265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.426 [2024-11-26 18:56:32.861638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.426 [2024-11-26 18:56:33.023761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.685 [2024-11-26 18:56:33.233244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.685 [2024-11-26 18:56:33.233340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.277 malloc0 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.277 malloc1 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.277 18:56:33 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.277 null0 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.277 [2024-11-26 18:56:33.889135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:07.277 [2024-11-26 18:56:33.891787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:07.277 [2024-11-26 18:56:33.891861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:07.277 [2024-11-26 18:56:33.892072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:07.277 [2024-11-26 18:56:33.892094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:07.277 [2024-11-26 18:56:33.892450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:07.277 [2024-11-26 18:56:33.892676] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:07.277 [2024-11-26 18:56:33.892697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:07.277 [2024-11-26 18:56:33.892897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.277 18:56:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:07.536 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.536 18:56:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:07.536 18:56:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:07.536 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.536 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.536 [2024-11-26 18:56:33.949378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:07.536 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.536 18:56:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:07.536 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.536 18:56:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.104 malloc2 00:07:08.104 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.104 18:56:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:08.104 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.104 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.104 [2024-11-26 18:56:34.494314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:08.104 [2024-11-26 18:56:34.511665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:08.104 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.104 [2024-11-26 18:56:34.514527] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:08.104 18:56:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.104 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.104 18:56:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:08.104 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.104 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.104 18:56:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:08.105 18:56:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60258 00:07:08.105 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60258 ']' 00:07:08.105 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60258 00:07:08.105 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:08.105 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:08.105 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60258 00:07:08.105 killing process with pid 60258 00:07:08.105 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.105 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.105 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60258' 00:07:08.105 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60258 00:07:08.105 18:56:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60258 00:07:08.105 [2024-11-26 18:56:34.600250] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.105 [2024-11-26 18:56:34.602439] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:08.105 [2024-11-26 18:56:34.602542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.105 [2024-11-26 18:56:34.602570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:08.105 [2024-11-26 18:56:34.635118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.105 [2024-11-26 18:56:34.635616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.105 [2024-11-26 18:56:34.635645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:10.004 [2024-11-26 18:56:36.280914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.002 ************************************ 00:07:11.002 END TEST raid1_resize_data_offset_test 00:07:11.002 ************************************ 00:07:11.002 18:56:37 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:07:11.002 00:07:11.002 real 0m4.828s 00:07:11.002 user 0m4.831s 00:07:11.002 sys 0m0.638s 00:07:11.002 18:56:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.002 18:56:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 18:56:37 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:11.002 18:56:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.002 18:56:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.002 18:56:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 ************************************ 00:07:11.002 START TEST raid0_resize_superblock_test 00:07:11.002 ************************************ 00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:11.002 Process raid pid: 60342 00:07:11.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60342 00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60342' 00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60342 00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60342 ']' 00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.002 18:56:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 [2024-11-26 18:56:37.551028] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:07:11.002 [2024-11-26 18:56:37.551236] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.261 [2024-11-26 18:56:37.740948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.261 [2024-11-26 18:56:37.877248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.519 [2024-11-26 18:56:38.089189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.519 [2024-11-26 18:56:38.089265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.084 18:56:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.084 18:56:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:12.084 18:56:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:12.084 18:56:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.084 18:56:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.648 malloc0 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.648 [2024-11-26 18:56:39.055683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:12.648 [2024-11-26 18:56:39.056110] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.648 [2024-11-26 18:56:39.056163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:12.648 [2024-11-26 18:56:39.056192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.648 [2024-11-26 18:56:39.059408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.648 [2024-11-26 18:56:39.059467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:12.648 pt0 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.648 7a20ae9d-2234-444e-b6c7-916a3d3b2aba 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.648 5a5def8e-2d87-4752-99ad-3a77de8e8364 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:12.648 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.648 18:56:39 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.648 2ec35423-a6a6-4cf0-80bd-1f76b1c255c7 00:07:12.649 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.649 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:12.649 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:12.649 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.649 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.649 [2024-11-26 18:56:39.212542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5a5def8e-2d87-4752-99ad-3a77de8e8364 is claimed 00:07:12.649 [2024-11-26 18:56:39.212746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2ec35423-a6a6-4cf0-80bd-1f76b1c255c7 is claimed 00:07:12.649 [2024-11-26 18:56:39.212976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:12.649 [2024-11-26 18:56:39.213004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:12.649 [2024-11-26 18:56:39.213468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:12.649 [2024-11-26 18:56:39.213754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:12.649 [2024-11-26 18:56:39.213780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:12.649 [2024-11-26 18:56:39.214021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.649 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.649 18:56:39 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:12.649 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:12.649 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.649 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.649 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:12.906 [2024-11-26 
18:56:39.328905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.906 [2024-11-26 18:56:39.392977] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.906 [2024-11-26 18:56:39.393409] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5a5def8e-2d87-4752-99ad-3a77de8e8364' was resized: old size 131072, new size 204800 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.906 [2024-11-26 18:56:39.400784] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.906 [2024-11-26 18:56:39.400839] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2ec35423-a6a6-4cf0-80bd-1f76b1c255c7' was resized: old size 131072, new size 204800 00:07:12.906 
[2024-11-26 18:56:39.400890] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.906 18:56:39 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:12.906 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:12.906 [2024-11-26 18:56:39.517104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.164 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.164 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:13.164 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:13.164 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:13.164 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:13.164 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.164 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.164 [2024-11-26 18:56:39.564654] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:13.164 [2024-11-26 18:56:39.564793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:13.164 [2024-11-26 18:56:39.564821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:13.164 [2024-11-26 18:56:39.564844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:13.164 [2024-11-26 18:56:39.565078] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.164 [2024-11-26 18:56:39.565144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.164 
[2024-11-26 18:56:39.565166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:13.164 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.164 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:13.164 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.164 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.164 [2024-11-26 18:56:39.572477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:13.164 [2024-11-26 18:56:39.572573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.164 [2024-11-26 18:56:39.572607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:13.164 [2024-11-26 18:56:39.572626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.164 [2024-11-26 18:56:39.575781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.165 [2024-11-26 18:56:39.575851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:13.165 pt0 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.165 [2024-11-26 18:56:39.579152] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5a5def8e-2d87-4752-99ad-3a77de8e8364 00:07:13.165 [2024-11-26 18:56:39.579648] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5a5def8e-2d87-4752-99ad-3a77de8e8364 is claimed 00:07:13.165 [2024-11-26 18:56:39.579818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2ec35423-a6a6-4cf0-80bd-1f76b1c255c7 00:07:13.165 [2024-11-26 18:56:39.579857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2ec35423-a6a6-4cf0-80bd-1f76b1c255c7 is claimed 00:07:13.165 [2024-11-26 18:56:39.580044] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2ec35423-a6a6-4cf0-80bd-1f76b1c255c7 (2) smaller than existing raid bdev Raid (3) 00:07:13.165 [2024-11-26 18:56:39.580089] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 5a5def8e-2d87-4752-99ad-3a77de8e8364: File exists 00:07:13.165 [2024-11-26 18:56:39.580152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:13.165 [2024-11-26 18:56:39.580173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:13.165 [2024-11-26 18:56:39.580569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:13.165 [2024-11-26 18:56:39.580795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:13.165 [2024-11-26 18:56:39.580811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:13.165 [2024-11-26 18:56:39.581222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # 
case $raid_level in 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.165 [2024-11-26 18:56:39.593404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60342 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60342 ']' 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60342 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60342 00:07:13.165 killing process with pid 60342 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 60342' 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60342 00:07:13.165 18:56:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60342 00:07:13.165 [2024-11-26 18:56:39.683900] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.165 [2024-11-26 18:56:39.684049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.165 [2024-11-26 18:56:39.684126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.165 [2024-11-26 18:56:39.684142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:14.546 [2024-11-26 18:56:41.034593] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.922 ************************************ 00:07:15.922 END TEST raid0_resize_superblock_test 00:07:15.922 ************************************ 00:07:15.922 18:56:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:15.922 00:07:15.922 real 0m4.712s 00:07:15.922 user 0m4.937s 00:07:15.922 sys 0m0.702s 00:07:15.922 18:56:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.922 18:56:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.922 18:56:42 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:15.922 18:56:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.922 18:56:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.922 18:56:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.922 ************************************ 00:07:15.922 START TEST raid1_resize_superblock_test 00:07:15.922 ************************************ 00:07:15.922 
18:56:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:07:15.922 18:56:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:07:15.922 18:56:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60445
00:07:15.922 Process raid pid: 60445
00:07:15.922 18:56:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60445'
00:07:15.922 18:56:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:15.922 18:56:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60445
00:07:15.922 18:56:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60445 ']'
00:07:15.922 18:56:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.922 18:56:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:15.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:15.922 18:56:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.922 18:56:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:15.922 18:56:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:15.922 [2024-11-26 18:56:42.299323] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization...
00:07:15.922 [2024-11-26 18:56:42.299514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:15.922 [2024-11-26 18:56:42.481272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:16.180 [2024-11-26 18:56:42.633991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.439 [2024-11-26 18:56:42.868783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:16.439 [2024-11-26 18:56:42.869126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:17.007 18:56:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:17.007 18:56:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:17.007 18:56:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:17.007 18:56:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.007 18:56:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.574 malloc0
00:07:17.574 18:56:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.574 18:56:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:17.574 18:56:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.574 18:56:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.574 [2024-11-26 18:56:43.944079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:17.574 [2024-11-26 18:56:43.944204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:17.574 [2024-11-26 18:56:43.944251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:17.574 [2024-11-26 18:56:43.944276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:17.574 [2024-11-26 18:56:43.947743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:17.574 [2024-11-26 18:56:43.947806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:17.574 pt0
00:07:17.574 18:56:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.574 18:56:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:17.574 18:56:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.574 18:56:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.574 ef164bc9-9d33-4482-86b7-615c23ffe49b
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.575 962568d5-36c1-41cd-8094-e1d6482c0c09
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.575 73617d91-cdb8-40f3-9322-06cf941c2c11
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.575 [2024-11-26 18:56:44.141086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 962568d5-36c1-41cd-8094-e1d6482c0c09 is claimed
00:07:17.575 [2024-11-26 18:56:44.141233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 73617d91-cdb8-40f3-9322-06cf941c2c11 is claimed
00:07:17.575 [2024-11-26 18:56:44.141573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:17.575 [2024-11-26 18:56:44.141629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:07:17.575 [2024-11-26 18:56:44.142072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:17.575 [2024-11-26 18:56:44.142438] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:17.575 [2024-11-26 18:56:44.142459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:07:17.575 [2024-11-26 18:56:44.142693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.575 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks'
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.833 [2024-11-26 18:56:44.261557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 ))
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.833 [2024-11-26 18:56:44.305570] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:17.833 [2024-11-26 18:56:44.305766] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '962568d5-36c1-41cd-8094-e1d6482c0c09' was resized: old size 131072, new size 204800
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.833 [2024-11-26 18:56:44.313455] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:17.833 [2024-11-26 18:56:44.313507] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '73617d91-cdb8-40f3-9322-06cf941c2c11' was resized: old size 131072, new size 204800
00:07:17.833 [2024-11-26 18:56:44.313559] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:17.833 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:17.834 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:17.834 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:17.834 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks'
00:07:17.834 [2024-11-26 18:56:44.425581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:17.834 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 ))
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.092 [2024-11-26 18:56:44.481315] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:07:18.092 [2024-11-26 18:56:44.481638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:07:18.092 [2024-11-26 18:56:44.481699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:07:18.092 [2024-11-26 18:56:44.481954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:18.092 [2024-11-26 18:56:44.482311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:18.092 [2024-11-26 18:56:44.482431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:18.092 [2024-11-26 18:56:44.482462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.092 [2024-11-26 18:56:44.489117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:18.092 [2024-11-26 18:56:44.489455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:18.092 [2024-11-26 18:56:44.489503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:07:18.092 [2024-11-26 18:56:44.489531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:18.092 [2024-11-26 18:56:44.492806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:18.092 [2024-11-26 18:56:44.492866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:18.092 pt0
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:18.092 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.092 [2024-11-26 18:56:44.495553] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 962568d5-36c1-41cd-8094-e1d6482c0c09
00:07:18.092 [2024-11-26 18:56:44.495802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 962568d5-36c1-41cd-8094-e1d6482c0c09 is claimed
00:07:18.092 [2024-11-26 18:56:44.495976] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 73617d91-cdb8-40f3-9322-06cf941c2c11
00:07:18.092 [2024-11-26 18:56:44.496019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 73617d91-cdb8-40f3-9322-06cf941c2c11 is claimed
00:07:18.092 [2024-11-26 18:56:44.496204] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 73617d91-cdb8-40f3-9322-06cf941c2c11 (2) smaller than existing raid bdev Raid (3)
00:07:18.092 [2024-11-26 18:56:44.496251] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 962568d5-36c1-41cd-8094-e1d6482c0c09: File exists
00:07:18.092 [2024-11-26 18:56:44.496342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:07:18.092 [2024-11-26 18:56:44.496370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:07:18.092 [2024-11-26 18:56:44.496724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:07:18.092 [2024-11-26 18:56:44.496982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:07:18.092 [2024-11-26 18:56:44.497001] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:07:18.092 [2024-11-26 18:56:44.497476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks'
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:18.093 [2024-11-26 18:56:44.509676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 ))
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60445
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60445 ']'
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60445
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60445
00:07:18.093 killing process with pid 60445
18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60445'
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60445
00:07:18.093 [2024-11-26 18:56:44.601501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:18.093 18:56:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60445
00:07:18.093 [2024-11-26 18:56:44.601636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:18.093 [2024-11-26 18:56:44.601742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:18.093 [2024-11-26 18:56:44.601761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:07:19.469 [2024-11-26 18:56:46.030918] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:20.844 ************************************
00:07:20.844 END TEST raid1_resize_superblock_test
00:07:20.844 ************************************
00:07:20.844 18:56:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:07:20.844
00:07:20.844 real 0m5.027s
00:07:20.844 user 0m5.241s
00:07:20.844 sys 0m0.773s
00:07:20.844 18:56:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:20.844 18:56:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.844 18:56:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s
00:07:20.844 18:56:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']'
00:07:20.844 18:56:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd
00:07:20.844 18:56:47 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true
00:07:20.844 18:56:47 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd
00:07:20.844 18:56:47 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0
00:07:20.845 18:56:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:20.845 18:56:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:20.845 18:56:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:20.845 ************************************
00:07:20.845 START TEST raid_function_test_raid0
00:07:20.845 ************************************
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60549
00:07:20.845 Process raid pid: 60549
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60549'
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60549
00:07:20.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60549 ']'
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:20.845 18:56:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:20.845 [2024-11-26 18:56:47.435721] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization...
00:07:20.845 [2024-11-26 18:56:47.435970] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:21.102 [2024-11-26 18:56:47.623142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.360 [2024-11-26 18:56:47.777308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.618 [2024-11-26 18:56:48.014331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:21.618 [2024-11-26 18:56:48.014631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:21.876 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:21.876 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0
00:07:21.876 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:07:21.876 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:21.876 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:22.134 Base_1
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:22.134 Base_2
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:22.134 [2024-11-26 18:56:48.557431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:22.134 [2024-11-26 18:56:48.560162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:22.134 [2024-11-26 18:56:48.560321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:22.134 [2024-11-26 18:56:48.560344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:22.134 [2024-11-26 18:56:48.560777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:22.134 [2024-11-26 18:56:48.561021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:22.134 [2024-11-26 18:56:48.561037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:07:22.134 [2024-11-26 18:56:48.561400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:22.134 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:07:22.393 [2024-11-26 18:56:48.881612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:07:22.393 /dev/nbd0
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:22.393 1+0 records in
00:07:22.393 1+0 records out
00:07:22.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454083 s, 9.0 MB/s
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:22.393 18:56:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:22.651 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:22.651 {
00:07:22.651 "nbd_device": "/dev/nbd0",
00:07:22.651 "bdev_name": "raid"
00:07:22.651 }
00:07:22.651 ]'
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[
00:07:22.651 {
00:07:22.651 "nbd_device": "/dev/nbd0",
00:07:22.651 "bdev_name": "raid"
00:07:22.651 }
00:07:22.651 ]'
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:22.908 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
4096+0 records in
4096+0 records out
2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0365751 s, 57.3 MB/s
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
4096+0 records in
4096+0 records out
2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.390695 s, 5.4 MB/s
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
128+0 records in
128+0 records out
65536 bytes (66 kB, 64 KiB) copied, 0.000889142 s, 73.7 MB/s
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
2035+0 records in
2035+0 records out
1041920
bytes (1.0 MB, 1018 KiB) copied, 0.00913566 s, 114 MB/s 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:23.425 456+0 records in 00:07:23.425 456+0 records out 00:07:23.425 233472 bytes (233 kB, 228 KiB) copied, 0.00261791 s, 89.2 MB/s 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.425 18:56:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:23.683 [2024-11-26 18:56:50.129377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.683 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.683 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.683 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.683 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.683 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.684 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.684 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:23.684 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.684 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:23.684 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:23.684 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60549 00:07:23.941 18:56:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60549 ']' 00:07:23.942 18:56:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60549 00:07:23.942 18:56:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:23.942 18:56:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.942 18:56:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60549 00:07:23.942 killing process with pid 60549 00:07:23.942 18:56:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.942 18:56:50 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.942 18:56:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60549' 00:07:23.942 18:56:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60549 00:07:23.942 [2024-11-26 18:56:50.559536] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.942 18:56:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60549 00:07:23.942 [2024-11-26 18:56:50.559679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.942 [2024-11-26 18:56:50.559754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.942 [2024-11-26 18:56:50.559779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:24.200 [2024-11-26 18:56:50.774566] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.573 ************************************ 00:07:25.573 END TEST raid_function_test_raid0 00:07:25.573 ************************************ 00:07:25.573 18:56:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:25.573 00:07:25.573 real 0m4.636s 00:07:25.573 user 0m5.607s 00:07:25.573 sys 0m1.121s 00:07:25.573 18:56:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.573 18:56:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:25.573 18:56:51 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:25.573 18:56:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.573 18:56:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.573 18:56:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.573 
************************************ 00:07:25.573 START TEST raid_function_test_concat 00:07:25.573 ************************************ 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60689 00:07:25.574 Process raid pid: 60689 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60689' 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60689 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60689 ']' 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.574 18:56:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:25.574 [2024-11-26 18:56:52.079153] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:07:25.574 [2024-11-26 18:56:52.079346] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.833 [2024-11-26 18:56:52.260357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.833 [2024-11-26 18:56:52.408922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.099 [2024-11-26 18:56:52.637819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.099 [2024-11-26 18:56:52.637886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.665 Base_1 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.665 Base_2 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.665 [2024-11-26 18:56:53.177017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:26.665 [2024-11-26 18:56:53.179759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:26.665 [2024-11-26 18:56:53.180035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:26.665 [2024-11-26 18:56:53.180064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:26.665 [2024-11-26 18:56:53.180474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:26.665 [2024-11-26 18:56:53.180730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:26.665 [2024-11-26 18:56:53.180747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:26.665 [2024-11-26 18:56:53.181028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:26.665 18:56:53 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:26.665 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:26.666 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:27.231 [2024-11-26 18:56:53.553216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:27.231 /dev/nbd0 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.231 1+0 records in 00:07:27.231 1+0 records out 00:07:27.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429915 s, 9.5 MB/s 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:27.231 
18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:27.231 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:27.489 { 00:07:27.489 "nbd_device": "/dev/nbd0", 00:07:27.489 "bdev_name": "raid" 00:07:27.489 } 00:07:27.489 ]' 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:27.489 { 00:07:27.489 "nbd_device": "/dev/nbd0", 00:07:27.489 "bdev_name": "raid" 00:07:27.489 } 00:07:27.489 ]' 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:27.489 
18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:27.489 18:56:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:27.489 4096+0 records in 00:07:27.489 4096+0 records out 00:07:27.489 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0278415 s, 75.3 MB/s 00:07:27.489 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:28.055 4096+0 records in 00:07:28.055 4096+0 
records out 00:07:28.055 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.350322 s, 6.0 MB/s 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:28.055 128+0 records in 00:07:28.055 128+0 records out 00:07:28.055 65536 bytes (66 kB, 64 KiB) copied, 0.000585521 s, 112 MB/s 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:07:28.055 2035+0 records in 00:07:28.055 2035+0 records out 00:07:28.055 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00995406 s, 105 MB/s 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:28.055 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:28.056 456+0 records in 00:07:28.056 456+0 records out 00:07:28.056 233472 bytes (233 kB, 228 KiB) copied, 0.00279763 s, 83.5 MB/s 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.056 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:28.314 [2024-11-26 18:56:54.822366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.314 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:28.314 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:28.314 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:28.314 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.314 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.314 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:28.314 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:28.314 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.314 18:56:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:28.314 18:56:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:28.314 18:56:54 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60689 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60689 ']' 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60689 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.573 18:56:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60689 00:07:28.866 18:56:55 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.866 killing process with pid 60689 00:07:28.866 18:56:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.866 18:56:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60689' 00:07:28.866 18:56:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60689 00:07:28.866 18:56:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60689 00:07:28.866 [2024-11-26 18:56:55.222438] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.866 [2024-11-26 18:56:55.222587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.866 [2024-11-26 18:56:55.222669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.866 [2024-11-26 18:56:55.222701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:28.866 [2024-11-26 18:56:55.427670] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.246 18:56:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:30.246 00:07:30.246 real 0m4.623s 00:07:30.246 user 0m5.560s 00:07:30.246 sys 0m1.168s 00:07:30.246 18:56:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.246 ************************************ 00:07:30.246 END TEST raid_function_test_concat 00:07:30.246 18:56:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:30.246 ************************************ 00:07:30.246 18:56:56 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:30.246 18:56:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.246 18:56:56 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.246 18:56:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.246 ************************************ 00:07:30.246 START TEST raid0_resize_test 00:07:30.246 ************************************ 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60818 00:07:30.246 Process raid pid: 60818 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60818' 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60818 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60818 ']' 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:30.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.246 18:56:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.246 [2024-11-26 18:56:56.771541] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:07:30.246 [2024-11-26 18:56:56.771793] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.504 [2024-11-26 18:56:56.966645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.763 [2024-11-26 18:56:57.133707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.763 [2024-11-26 18:56:57.377389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.763 [2024-11-26 18:56:57.377474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.330 Base_1 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.330 
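The `waitforlisten` helper traced above polls until the target app is up and listening on its UNIX domain socket, giving up after `max_retries=100`. A minimal stand-alone sketch of that polling loop follows; the socket path, sleep intervals, and the background stand-in for the SPDK app are illustrative assumptions, not copied from autotest_common.sh:

```shell
#!/usr/bin/env bash
# Hedged sketch of a waitforlisten-style polling loop: retry until the
# daemon's RPC socket appears, giving up after max_retries attempts.
# A background touch stands in for the SPDK app creating its socket.
rpc_addr=/tmp/fake_spdk.sock.$$
max_retries=100

( sleep 0.2; : > "$rpc_addr" ) &   # stand-in for the app creating its socket

i=0
listened=0
while [ ! -e "$rpc_addr" ]; do
    i=$(( i + 1 ))
    if [ "$i" -ge "$max_retries" ]; then
        echo "timed out waiting for $rpc_addr" >&2
        break
    fi
    sleep 0.05
done
[ -e "$rpc_addr" ] && listened=1
wait                               # reap the stand-in background process
rm -f "$rpc_addr"
echo "listened=$listened"
```

The real helper additionally probes the socket with an RPC call rather than just checking for the path; this sketch only shows the retry/timeout shape.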
18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.330 Base_2 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.330 [2024-11-26 18:56:57.863253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:31.330 [2024-11-26 18:56:57.865966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:31.330 [2024-11-26 18:56:57.866088] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:31.330 [2024-11-26 18:56:57.866110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:31.330 [2024-11-26 18:56:57.866532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:31.330 [2024-11-26 18:56:57.866733] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:31.330 [2024-11-26 18:56:57.866757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:31.330 [2024-11-26 18:56:57.866998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.330 
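The creation entries above show where the raid0 numbers come from: each `bdev_null_create Base_N 32 512` yields a 32 MiB bdev of 65536 blocks, and the striped array registers with "blockcnt 131072, blocklen 512". That arithmetic can be reproduced directly (variable names mirror the locals in `raid_resize_test` but the sketch itself is illustrative):

```shell
#!/usr/bin/env bash
# Hedged sketch of the raid0 capacity arithmetic seen in the trace:
# two 32 MiB base bdevs with 512-byte blocks, striped into one array.
blksize=512
bdev_size_mb=32
num_base_bdevs=2

blocks_per_bdev=$(( bdev_size_mb * 1024 * 1024 / blksize ))   # 65536
raid0_blkcnt=$(( blocks_per_bdev * num_base_bdevs ))          # striped: sum of blocks
raid_size_mb=$(( raid0_blkcnt * blksize / 1024 / 1024 ))      # compared to expected_size

echo "raid0: $raid0_blkcnt blocks = ${raid_size_mb} MiB"
```

The test's later check `'[' 64 '!=' 64 ']'` is exactly this `raid_size_mb` value compared against `expected_size` after both base bdevs are resized to 64 MiB.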
18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.330 [2024-11-26 18:56:57.871206] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:31.330 [2024-11-26 18:56:57.871255] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:31.330 true 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.330 [2024-11-26 18:56:57.883516] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.330 [2024-11-26 18:56:57.931251] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:31.330 [2024-11-26 18:56:57.931328] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:31.330 [2024-11-26 18:56:57.931379] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:31.330 true 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.330 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:31.330 [2024-11-26 18:56:57.943473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.593 18:56:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.593 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:31.593 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:31.593 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:31.593 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:31.593 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:31.593 18:56:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60818 00:07:31.593 18:56:57 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # '[' -z 60818 ']' 00:07:31.593 18:56:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60818 00:07:31.593 18:56:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:31.593 18:56:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.593 18:56:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60818 00:07:31.593 18:56:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.593 killing process with pid 60818 00:07:31.593 18:56:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.593 18:56:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60818' 00:07:31.593 18:56:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60818 00:07:31.593 18:56:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60818 00:07:31.593 [2024-11-26 18:56:58.033572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.593 [2024-11-26 18:56:58.033720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.593 [2024-11-26 18:56:58.033815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.593 [2024-11-26 18:56:58.033833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:31.593 [2024-11-26 18:56:58.051870] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.966 18:56:59 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:32.966 00:07:32.966 real 0m2.571s 00:07:32.966 user 0m2.841s 00:07:32.966 sys 0m0.462s 00:07:32.966 18:56:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.966 
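The `killprocess` sequence traced above (for pids 60689 and 60818) follows a fixed shape: confirm the pid is alive with `kill -0`, inspect its command name, refuse to kill anything named `sudo`, then kill and reap it. A minimal sketch of that shape, using a `sleep` process as a stand-in for the bdev_svc app (the guard logic is simplified relative to autotest_common.sh):

```shell
#!/usr/bin/env bash
# Hedged sketch of the killprocess pattern: kill -0 to assert liveness,
# a name check before killing, then wait to reap the terminated process.
sleep 30 &
pid=$!

kill -0 "$pid"                          # process must exist
process_name=$(ps -p "$pid" -o comm=)   # the trace uses ps --no-headers -o comm=
if [ "$process_name" = sudo ]; then
    echo "refusing to kill $pid" >&2
else
    echo "killing process with pid $pid"
    kill "$pid"
fi
wait "$pid" 2>/dev/null || true         # SIGTERM makes wait return nonzero
```

`kill -0` sends no signal at all; it only checks that the pid exists and is signalable, which is why the trace shows it before every kill.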
18:56:59 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.966 ************************************ 00:07:32.966 END TEST raid0_resize_test 00:07:32.966 ************************************ 00:07:32.966 18:56:59 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:32.966 18:56:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:32.966 18:56:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.966 18:56:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.966 ************************************ 00:07:32.966 START TEST raid1_resize_test 00:07:32.966 ************************************ 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:32.966 Process raid pid: 60880 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60880 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 
'Process raid pid: 60880' 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60880 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60880 ']' 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.966 18:56:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.967 18:56:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.967 18:56:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.967 18:56:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.967 [2024-11-26 18:56:59.417102] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:07:32.967 [2024-11-26 18:56:59.417371] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.225 [2024-11-26 18:56:59.625171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.225 [2024-11-26 18:56:59.798937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.484 [2024-11-26 18:57:00.037108] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.484 [2024-11-26 18:57:00.037199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.048 Base_1 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.048 Base_2 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.048 [2024-11-26 18:57:00.658526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:34.048 [2024-11-26 18:57:00.661385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:34.048 [2024-11-26 18:57:00.661523] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:34.048 [2024-11-26 18:57:00.661546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:34.048 [2024-11-26 18:57:00.662045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:34.048 [2024-11-26 18:57:00.662349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:34.048 [2024-11-26 18:57:00.662376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:34.048 [2024-11-26 18:57:00.662778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.048 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.048 [2024-11-26 18:57:00.666683] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:34.048 [2024-11-26 18:57:00.666738] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:34.306 true 00:07:34.306 
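The raid1 numbers above differ from the raid0 run in a telling way: creation reports "blockcnt 65536" (not 131072), and after resizing Base_1 alone the array stays at 65536 blocks, growing to 131072 only once Base_2 is resized too. That is consistent with a mirror exposing the minimum of its base bdev sizes, which the following sketch reproduces (the `min` helper is illustrative, not from bdev_raid.sh):

```shell
#!/usr/bin/env bash
# Hedged sketch: a raid1 mirror's capacity tracks min(base blocks), as the
# trace shows — it grows only after every base bdev has been resized.
blksize=512
base_1_blocks=$(( 32 * 1024 * 1024 / blksize ))    # 65536
base_2_blocks=$(( 32 * 1024 * 1024 / blksize ))    # 65536

min() { if [ "$1" -le "$2" ]; then echo "$1"; else echo "$2"; fi; }

at_create=$(min "$base_1_blocks" "$base_2_blocks")

base_1_blocks=$(( 64 * 1024 * 1024 / blksize ))    # bdev_null_resize Base_1 64
after_one=$(min "$base_1_blocks" "$base_2_blocks") # still limited by Base_2

base_2_blocks=$(( 64 * 1024 * 1024 / blksize ))    # bdev_null_resize Base_2 64
after_both=$(min "$base_1_blocks" "$base_2_blocks")

echo "raid1 blkcnt: $at_create -> $after_one -> $after_both"
```

This is why `raid_resize_test 1` computes `expected_size` from a single bdev's size while the raid0 variant multiplies by the number of base bdevs.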
18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.306 [2024-11-26 18:57:00.678898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.306 [2024-11-26 18:57:00.722670] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:34.306 [2024-11-26 18:57:00.722720] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:34.306 [2024-11-26 18:57:00.722764] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:34.306 true 00:07:34.306 18:57:00 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.306 [2024-11-26 18:57:00.734899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60880 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60880 ']' 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60880 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60880 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.306 killing process with pid 60880 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60880' 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60880 00:07:34.306 18:57:00 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60880 00:07:34.306 [2024-11-26 18:57:00.813002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.306 [2024-11-26 18:57:00.813179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.306 [2024-11-26 18:57:00.813980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.306 [2024-11-26 18:57:00.814020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:34.306 [2024-11-26 18:57:00.830614] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.679 18:57:02 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:35.679 00:07:35.679 real 0m2.782s 00:07:35.679 user 0m3.133s 00:07:35.679 sys 0m0.523s 00:07:35.679 18:57:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.679 18:57:02 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 ************************************ 00:07:35.679 END TEST raid1_resize_test 00:07:35.679 ************************************ 00:07:35.679 18:57:02 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:35.679 18:57:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:35.679 18:57:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:35.679 18:57:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:35.679 18:57:02 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.679 18:57:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 ************************************ 00:07:35.679 START TEST raid_state_function_test 00:07:35.679 ************************************ 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60942 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60942' 00:07:35.679 Process raid pid: 60942 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60942 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60942 ']' 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.679 18:57:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.679 [2024-11-26 18:57:02.279278] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:07:35.679 [2024-11-26 18:57:02.279555] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.937 [2024-11-26 18:57:02.476993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.194 [2024-11-26 18:57:02.641041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.451 [2024-11-26 18:57:02.885474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.451 [2024-11-26 18:57:02.885584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.709 [2024-11-26 18:57:03.251210] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.709 
[2024-11-26 18:57:03.251309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.709 [2024-11-26 18:57:03.251329] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.709 [2024-11-26 18:57:03.251347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.709 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.967 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.967 "name": "Existed_Raid", 00:07:36.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.967 "strip_size_kb": 64, 00:07:36.967 "state": "configuring", 00:07:36.967 "raid_level": "raid0", 00:07:36.967 "superblock": false, 00:07:36.967 "num_base_bdevs": 2, 00:07:36.967 "num_base_bdevs_discovered": 0, 00:07:36.967 "num_base_bdevs_operational": 2, 00:07:36.967 "base_bdevs_list": [ 00:07:36.967 { 00:07:36.967 "name": "BaseBdev1", 00:07:36.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.967 "is_configured": false, 00:07:36.967 "data_offset": 0, 00:07:36.967 "data_size": 0 00:07:36.967 }, 00:07:36.967 { 00:07:36.967 "name": "BaseBdev2", 00:07:36.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.967 "is_configured": false, 00:07:36.967 "data_offset": 0, 00:07:36.967 "data_size": 0 00:07:36.967 } 00:07:36.967 ] 00:07:36.967 }' 00:07:36.967 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.967 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.531 [2024-11-26 18:57:03.879320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.531 [2024-11-26 18:57:03.879391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.531 [2024-11-26 18:57:03.887321] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:37.531 [2024-11-26 18:57:03.887414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:37.531 [2024-11-26 18:57:03.887432] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.531 [2024-11-26 18:57:03.887454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.531 [2024-11-26 18:57:03.938206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.531 BaseBdev1 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:37.531 18:57:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.531 [ 00:07:37.531 { 00:07:37.531 "name": "BaseBdev1", 00:07:37.531 "aliases": [ 00:07:37.531 "e3cf97ba-c3d2-4213-bc6d-9451c80c328a" 00:07:37.531 ], 00:07:37.531 "product_name": "Malloc disk", 00:07:37.531 "block_size": 512, 00:07:37.531 "num_blocks": 65536, 00:07:37.531 "uuid": "e3cf97ba-c3d2-4213-bc6d-9451c80c328a", 00:07:37.531 "assigned_rate_limits": { 00:07:37.531 "rw_ios_per_sec": 0, 00:07:37.531 "rw_mbytes_per_sec": 0, 00:07:37.531 "r_mbytes_per_sec": 0, 00:07:37.531 "w_mbytes_per_sec": 0 00:07:37.531 }, 00:07:37.531 "claimed": true, 00:07:37.531 "claim_type": "exclusive_write", 00:07:37.531 "zoned": false, 00:07:37.531 "supported_io_types": { 00:07:37.531 "read": true, 00:07:37.531 "write": true, 00:07:37.531 "unmap": true, 00:07:37.531 "flush": true, 
00:07:37.531 "reset": true, 00:07:37.531 "nvme_admin": false, 00:07:37.531 "nvme_io": false, 00:07:37.531 "nvme_io_md": false, 00:07:37.531 "write_zeroes": true, 00:07:37.531 "zcopy": true, 00:07:37.531 "get_zone_info": false, 00:07:37.531 "zone_management": false, 00:07:37.531 "zone_append": false, 00:07:37.531 "compare": false, 00:07:37.531 "compare_and_write": false, 00:07:37.531 "abort": true, 00:07:37.531 "seek_hole": false, 00:07:37.531 "seek_data": false, 00:07:37.531 "copy": true, 00:07:37.531 "nvme_iov_md": false 00:07:37.531 }, 00:07:37.531 "memory_domains": [ 00:07:37.531 { 00:07:37.531 "dma_device_id": "system", 00:07:37.531 "dma_device_type": 1 00:07:37.531 }, 00:07:37.531 { 00:07:37.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.531 "dma_device_type": 2 00:07:37.531 } 00:07:37.531 ], 00:07:37.531 "driver_specific": {} 00:07:37.531 } 00:07:37.531 ] 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.531 18:57:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.532 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.532 18:57:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.532 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.532 "name": "Existed_Raid", 00:07:37.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.532 "strip_size_kb": 64, 00:07:37.532 "state": "configuring", 00:07:37.532 "raid_level": "raid0", 00:07:37.532 "superblock": false, 00:07:37.532 "num_base_bdevs": 2, 00:07:37.532 "num_base_bdevs_discovered": 1, 00:07:37.532 "num_base_bdevs_operational": 2, 00:07:37.532 "base_bdevs_list": [ 00:07:37.532 { 00:07:37.532 "name": "BaseBdev1", 00:07:37.532 "uuid": "e3cf97ba-c3d2-4213-bc6d-9451c80c328a", 00:07:37.532 "is_configured": true, 00:07:37.532 "data_offset": 0, 00:07:37.532 "data_size": 65536 00:07:37.532 }, 00:07:37.532 { 00:07:37.532 "name": "BaseBdev2", 00:07:37.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.532 "is_configured": false, 00:07:37.532 "data_offset": 0, 00:07:37.532 "data_size": 0 00:07:37.532 } 00:07:37.532 ] 00:07:37.532 }' 00:07:37.532 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.532 18:57:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.096 [2024-11-26 18:57:04.462450] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:38.096 [2024-11-26 18:57:04.462759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.096 [2024-11-26 18:57:04.470564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.096 [2024-11-26 18:57:04.473312] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:38.096 [2024-11-26 18:57:04.473551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.096 "name": "Existed_Raid", 00:07:38.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.096 "strip_size_kb": 64, 00:07:38.096 "state": "configuring", 00:07:38.096 "raid_level": "raid0", 00:07:38.096 "superblock": false, 00:07:38.096 "num_base_bdevs": 2, 00:07:38.096 
"num_base_bdevs_discovered": 1, 00:07:38.096 "num_base_bdevs_operational": 2, 00:07:38.096 "base_bdevs_list": [ 00:07:38.096 { 00:07:38.096 "name": "BaseBdev1", 00:07:38.096 "uuid": "e3cf97ba-c3d2-4213-bc6d-9451c80c328a", 00:07:38.096 "is_configured": true, 00:07:38.096 "data_offset": 0, 00:07:38.096 "data_size": 65536 00:07:38.096 }, 00:07:38.096 { 00:07:38.096 "name": "BaseBdev2", 00:07:38.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.096 "is_configured": false, 00:07:38.096 "data_offset": 0, 00:07:38.096 "data_size": 0 00:07:38.096 } 00:07:38.096 ] 00:07:38.096 }' 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.096 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.663 18:57:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:38.663 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.663 18:57:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.663 [2024-11-26 18:57:05.021995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:38.663 [2024-11-26 18:57:05.022077] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:38.663 [2024-11-26 18:57:05.022094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:38.663 [2024-11-26 18:57:05.022489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:38.663 [2024-11-26 18:57:05.022746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:38.663 [2024-11-26 18:57:05.022770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:38.663 [2024-11-26 18:57:05.023124] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.663 BaseBdev2 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.663 [ 00:07:38.663 { 00:07:38.663 "name": "BaseBdev2", 00:07:38.663 "aliases": [ 00:07:38.663 "9c992834-167e-451b-858e-f83dbb0d5a66" 00:07:38.663 ], 00:07:38.663 "product_name": "Malloc disk", 00:07:38.663 "block_size": 512, 00:07:38.663 "num_blocks": 65536, 00:07:38.663 "uuid": "9c992834-167e-451b-858e-f83dbb0d5a66", 00:07:38.663 
"assigned_rate_limits": { 00:07:38.663 "rw_ios_per_sec": 0, 00:07:38.663 "rw_mbytes_per_sec": 0, 00:07:38.663 "r_mbytes_per_sec": 0, 00:07:38.663 "w_mbytes_per_sec": 0 00:07:38.663 }, 00:07:38.663 "claimed": true, 00:07:38.663 "claim_type": "exclusive_write", 00:07:38.663 "zoned": false, 00:07:38.663 "supported_io_types": { 00:07:38.663 "read": true, 00:07:38.663 "write": true, 00:07:38.663 "unmap": true, 00:07:38.663 "flush": true, 00:07:38.663 "reset": true, 00:07:38.663 "nvme_admin": false, 00:07:38.663 "nvme_io": false, 00:07:38.663 "nvme_io_md": false, 00:07:38.663 "write_zeroes": true, 00:07:38.663 "zcopy": true, 00:07:38.663 "get_zone_info": false, 00:07:38.663 "zone_management": false, 00:07:38.663 "zone_append": false, 00:07:38.663 "compare": false, 00:07:38.663 "compare_and_write": false, 00:07:38.663 "abort": true, 00:07:38.663 "seek_hole": false, 00:07:38.663 "seek_data": false, 00:07:38.663 "copy": true, 00:07:38.663 "nvme_iov_md": false 00:07:38.663 }, 00:07:38.663 "memory_domains": [ 00:07:38.663 { 00:07:38.663 "dma_device_id": "system", 00:07:38.663 "dma_device_type": 1 00:07:38.663 }, 00:07:38.663 { 00:07:38.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.663 "dma_device_type": 2 00:07:38.663 } 00:07:38.663 ], 00:07:38.663 "driver_specific": {} 00:07:38.663 } 00:07:38.663 ] 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.663 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.663 "name": "Existed_Raid", 00:07:38.663 "uuid": "419a799d-b573-41b2-b322-61aeb610145d", 00:07:38.663 "strip_size_kb": 64, 00:07:38.663 "state": "online", 00:07:38.663 "raid_level": "raid0", 00:07:38.664 "superblock": false, 00:07:38.664 "num_base_bdevs": 2, 00:07:38.664 "num_base_bdevs_discovered": 2, 00:07:38.664 "num_base_bdevs_operational": 2, 00:07:38.664 "base_bdevs_list": [ 00:07:38.664 { 
00:07:38.664 "name": "BaseBdev1", 00:07:38.664 "uuid": "e3cf97ba-c3d2-4213-bc6d-9451c80c328a", 00:07:38.664 "is_configured": true, 00:07:38.664 "data_offset": 0, 00:07:38.664 "data_size": 65536 00:07:38.664 }, 00:07:38.664 { 00:07:38.664 "name": "BaseBdev2", 00:07:38.664 "uuid": "9c992834-167e-451b-858e-f83dbb0d5a66", 00:07:38.664 "is_configured": true, 00:07:38.664 "data_offset": 0, 00:07:38.664 "data_size": 65536 00:07:38.664 } 00:07:38.664 ] 00:07:38.664 }' 00:07:38.664 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.664 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.230 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:39.230 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:39.230 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:39.230 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:39.230 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:39.230 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:39.230 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:39.230 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:39.230 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.231 [2024-11-26 18:57:05.574628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:39.231 "name": "Existed_Raid", 00:07:39.231 "aliases": [ 00:07:39.231 "419a799d-b573-41b2-b322-61aeb610145d" 00:07:39.231 ], 00:07:39.231 "product_name": "Raid Volume", 00:07:39.231 "block_size": 512, 00:07:39.231 "num_blocks": 131072, 00:07:39.231 "uuid": "419a799d-b573-41b2-b322-61aeb610145d", 00:07:39.231 "assigned_rate_limits": { 00:07:39.231 "rw_ios_per_sec": 0, 00:07:39.231 "rw_mbytes_per_sec": 0, 00:07:39.231 "r_mbytes_per_sec": 0, 00:07:39.231 "w_mbytes_per_sec": 0 00:07:39.231 }, 00:07:39.231 "claimed": false, 00:07:39.231 "zoned": false, 00:07:39.231 "supported_io_types": { 00:07:39.231 "read": true, 00:07:39.231 "write": true, 00:07:39.231 "unmap": true, 00:07:39.231 "flush": true, 00:07:39.231 "reset": true, 00:07:39.231 "nvme_admin": false, 00:07:39.231 "nvme_io": false, 00:07:39.231 "nvme_io_md": false, 00:07:39.231 "write_zeroes": true, 00:07:39.231 "zcopy": false, 00:07:39.231 "get_zone_info": false, 00:07:39.231 "zone_management": false, 00:07:39.231 "zone_append": false, 00:07:39.231 "compare": false, 00:07:39.231 "compare_and_write": false, 00:07:39.231 "abort": false, 00:07:39.231 "seek_hole": false, 00:07:39.231 "seek_data": false, 00:07:39.231 "copy": false, 00:07:39.231 "nvme_iov_md": false 00:07:39.231 }, 00:07:39.231 "memory_domains": [ 00:07:39.231 { 00:07:39.231 "dma_device_id": "system", 00:07:39.231 "dma_device_type": 1 00:07:39.231 }, 00:07:39.231 { 00:07:39.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.231 "dma_device_type": 2 00:07:39.231 }, 00:07:39.231 { 00:07:39.231 "dma_device_id": "system", 00:07:39.231 "dma_device_type": 1 00:07:39.231 }, 00:07:39.231 { 00:07:39.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.231 "dma_device_type": 2 00:07:39.231 } 00:07:39.231 ], 00:07:39.231 "driver_specific": { 00:07:39.231 "raid": { 00:07:39.231 "uuid": "419a799d-b573-41b2-b322-61aeb610145d", 
00:07:39.231 "strip_size_kb": 64, 00:07:39.231 "state": "online", 00:07:39.231 "raid_level": "raid0", 00:07:39.231 "superblock": false, 00:07:39.231 "num_base_bdevs": 2, 00:07:39.231 "num_base_bdevs_discovered": 2, 00:07:39.231 "num_base_bdevs_operational": 2, 00:07:39.231 "base_bdevs_list": [ 00:07:39.231 { 00:07:39.231 "name": "BaseBdev1", 00:07:39.231 "uuid": "e3cf97ba-c3d2-4213-bc6d-9451c80c328a", 00:07:39.231 "is_configured": true, 00:07:39.231 "data_offset": 0, 00:07:39.231 "data_size": 65536 00:07:39.231 }, 00:07:39.231 { 00:07:39.231 "name": "BaseBdev2", 00:07:39.231 "uuid": "9c992834-167e-451b-858e-f83dbb0d5a66", 00:07:39.231 "is_configured": true, 00:07:39.231 "data_offset": 0, 00:07:39.231 "data_size": 65536 00:07:39.231 } 00:07:39.231 ] 00:07:39.231 } 00:07:39.231 } 00:07:39.231 }' 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:39.231 BaseBdev2' 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.231 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.489 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.489 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.489 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:39.489 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.489 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.489 [2024-11-26 18:57:05.858469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:39.489 [2024-11-26 18:57:05.858552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.489 [2024-11-26 18:57:05.858655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.489 18:57:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.489 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.490 18:57:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.490 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.490 "name": "Existed_Raid", 00:07:39.490 "uuid": "419a799d-b573-41b2-b322-61aeb610145d", 00:07:39.490 "strip_size_kb": 64, 00:07:39.490 "state": "offline", 00:07:39.490 "raid_level": "raid0", 00:07:39.490 "superblock": false, 00:07:39.490 "num_base_bdevs": 2, 00:07:39.490 "num_base_bdevs_discovered": 1, 00:07:39.490 "num_base_bdevs_operational": 1, 00:07:39.490 "base_bdevs_list": [ 00:07:39.490 { 00:07:39.490 "name": null, 00:07:39.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.490 "is_configured": false, 00:07:39.490 "data_offset": 0, 00:07:39.490 "data_size": 65536 00:07:39.490 }, 00:07:39.490 { 00:07:39.490 "name": "BaseBdev2", 00:07:39.490 "uuid": "9c992834-167e-451b-858e-f83dbb0d5a66", 00:07:39.490 "is_configured": true, 00:07:39.490 "data_offset": 0, 00:07:39.490 "data_size": 65536 00:07:39.490 } 00:07:39.490 ] 00:07:39.490 }' 00:07:39.490 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.490 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:40.057 18:57:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.057 [2024-11-26 18:57:06.573740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:40.057 [2024-11-26 18:57:06.573847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:40.057 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60942 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60942 ']' 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60942 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60942 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.316 killing process with pid 60942 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60942' 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60942 00:07:40.316 18:57:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60942 00:07:40.316 [2024-11-26 18:57:06.765617] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.316 [2024-11-26 18:57:06.782306] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.306 18:57:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:41.306 00:07:41.306 real 0m5.813s 00:07:41.306 user 0m8.593s 00:07:41.306 sys 
0m0.951s 00:07:41.306 18:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.306 18:57:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.306 ************************************ 00:07:41.306 END TEST raid_state_function_test 00:07:41.306 ************************************ 00:07:41.564 18:57:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:41.564 18:57:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:41.564 18:57:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.564 18:57:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.564 ************************************ 00:07:41.564 START TEST raid_state_function_test_sb 00:07:41.564 ************************************ 00:07:41.564 18:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:41.564 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:41.564 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:41.564 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:41.564 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:41.565 Process raid pid: 61201 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61201 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61201' 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61201 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61201 ']' 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.565 18:57:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.565 [2024-11-26 18:57:08.075407] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:07:41.565 [2024-11-26 18:57:08.075754] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.823 [2024-11-26 18:57:08.254122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.823 [2024-11-26 18:57:08.404938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.082 [2024-11-26 18:57:08.635562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.082 [2024-11-26 18:57:08.635623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.657 [2024-11-26 18:57:09.091697] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.657 [2024-11-26 18:57:09.091764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.657 [2024-11-26 18:57:09.091782] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.657 [2024-11-26 18:57:09.091798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.657 
18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.657 "name": "Existed_Raid", 00:07:42.657 "uuid": "88aee8ff-f378-4d4f-aa40-503a913b02d1", 00:07:42.657 "strip_size_kb": 
64, 00:07:42.657 "state": "configuring", 00:07:42.657 "raid_level": "raid0", 00:07:42.657 "superblock": true, 00:07:42.657 "num_base_bdevs": 2, 00:07:42.657 "num_base_bdevs_discovered": 0, 00:07:42.657 "num_base_bdevs_operational": 2, 00:07:42.657 "base_bdevs_list": [ 00:07:42.657 { 00:07:42.657 "name": "BaseBdev1", 00:07:42.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.657 "is_configured": false, 00:07:42.657 "data_offset": 0, 00:07:42.657 "data_size": 0 00:07:42.657 }, 00:07:42.657 { 00:07:42.657 "name": "BaseBdev2", 00:07:42.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.657 "is_configured": false, 00:07:42.657 "data_offset": 0, 00:07:42.657 "data_size": 0 00:07:42.657 } 00:07:42.657 ] 00:07:42.657 }' 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.657 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.224 [2024-11-26 18:57:09.583760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.224 [2024-11-26 18:57:09.583811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.224 18:57:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.224 [2024-11-26 18:57:09.591730] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:43.224 [2024-11-26 18:57:09.591793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.224 [2024-11-26 18:57:09.591809] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.224 [2024-11-26 18:57:09.591830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.224 [2024-11-26 18:57:09.641181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.224 BaseBdev1 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:43.224 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.225 [ 00:07:43.225 { 00:07:43.225 "name": "BaseBdev1", 00:07:43.225 "aliases": [ 00:07:43.225 "d85946c2-f31e-4288-96a3-c9ef84a04f6f" 00:07:43.225 ], 00:07:43.225 "product_name": "Malloc disk", 00:07:43.225 "block_size": 512, 00:07:43.225 "num_blocks": 65536, 00:07:43.225 "uuid": "d85946c2-f31e-4288-96a3-c9ef84a04f6f", 00:07:43.225 "assigned_rate_limits": { 00:07:43.225 "rw_ios_per_sec": 0, 00:07:43.225 "rw_mbytes_per_sec": 0, 00:07:43.225 "r_mbytes_per_sec": 0, 00:07:43.225 "w_mbytes_per_sec": 0 00:07:43.225 }, 00:07:43.225 "claimed": true, 00:07:43.225 "claim_type": "exclusive_write", 00:07:43.225 "zoned": false, 00:07:43.225 "supported_io_types": { 00:07:43.225 "read": true, 00:07:43.225 "write": true, 00:07:43.225 "unmap": true, 00:07:43.225 "flush": true, 00:07:43.225 "reset": true, 00:07:43.225 "nvme_admin": false, 00:07:43.225 "nvme_io": false, 00:07:43.225 "nvme_io_md": false, 00:07:43.225 "write_zeroes": true, 00:07:43.225 "zcopy": true, 00:07:43.225 "get_zone_info": false, 00:07:43.225 "zone_management": false, 00:07:43.225 "zone_append": false, 00:07:43.225 "compare": false, 00:07:43.225 "compare_and_write": false, 00:07:43.225 
"abort": true, 00:07:43.225 "seek_hole": false, 00:07:43.225 "seek_data": false, 00:07:43.225 "copy": true, 00:07:43.225 "nvme_iov_md": false 00:07:43.225 }, 00:07:43.225 "memory_domains": [ 00:07:43.225 { 00:07:43.225 "dma_device_id": "system", 00:07:43.225 "dma_device_type": 1 00:07:43.225 }, 00:07:43.225 { 00:07:43.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.225 "dma_device_type": 2 00:07:43.225 } 00:07:43.225 ], 00:07:43.225 "driver_specific": {} 00:07:43.225 } 00:07:43.225 ] 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.225 "name": "Existed_Raid", 00:07:43.225 "uuid": "27e9378d-c1af-49a1-99c0-51ed315cc744", 00:07:43.225 "strip_size_kb": 64, 00:07:43.225 "state": "configuring", 00:07:43.225 "raid_level": "raid0", 00:07:43.225 "superblock": true, 00:07:43.225 "num_base_bdevs": 2, 00:07:43.225 "num_base_bdevs_discovered": 1, 00:07:43.225 "num_base_bdevs_operational": 2, 00:07:43.225 "base_bdevs_list": [ 00:07:43.225 { 00:07:43.225 "name": "BaseBdev1", 00:07:43.225 "uuid": "d85946c2-f31e-4288-96a3-c9ef84a04f6f", 00:07:43.225 "is_configured": true, 00:07:43.225 "data_offset": 2048, 00:07:43.225 "data_size": 63488 00:07:43.225 }, 00:07:43.225 { 00:07:43.225 "name": "BaseBdev2", 00:07:43.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.225 "is_configured": false, 00:07:43.225 "data_offset": 0, 00:07:43.225 "data_size": 0 00:07:43.225 } 00:07:43.225 ] 00:07:43.225 }' 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.225 18:57:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:43.791 [2024-11-26 18:57:10.189382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.791 [2024-11-26 18:57:10.189461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.791 [2024-11-26 18:57:10.197424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.791 [2024-11-26 18:57:10.200334] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.791 [2024-11-26 18:57:10.200445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.791 "name": "Existed_Raid", 00:07:43.791 "uuid": "4a816f5c-7b97-4253-a63f-7aed6e0dac39", 00:07:43.791 "strip_size_kb": 64, 00:07:43.791 "state": "configuring", 00:07:43.791 "raid_level": "raid0", 00:07:43.791 "superblock": true, 00:07:43.791 "num_base_bdevs": 2, 00:07:43.791 "num_base_bdevs_discovered": 1, 00:07:43.791 "num_base_bdevs_operational": 2, 00:07:43.791 "base_bdevs_list": [ 00:07:43.791 { 00:07:43.791 "name": "BaseBdev1", 00:07:43.791 "uuid": "d85946c2-f31e-4288-96a3-c9ef84a04f6f", 00:07:43.791 "is_configured": true, 00:07:43.791 "data_offset": 2048, 
00:07:43.791 "data_size": 63488 00:07:43.791 }, 00:07:43.791 { 00:07:43.791 "name": "BaseBdev2", 00:07:43.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.791 "is_configured": false, 00:07:43.791 "data_offset": 0, 00:07:43.791 "data_size": 0 00:07:43.791 } 00:07:43.791 ] 00:07:43.791 }' 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.791 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.050 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:44.050 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.051 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.309 [2024-11-26 18:57:10.705424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:44.309 [2024-11-26 18:57:10.705785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:44.309 [2024-11-26 18:57:10.705813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:44.309 [2024-11-26 18:57:10.706161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:44.309 BaseBdev2 00:07:44.309 [2024-11-26 18:57:10.706409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:44.309 [2024-11-26 18:57:10.706437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:44.309 [2024-11-26 18:57:10.706615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.309 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.309 [ 00:07:44.309 { 00:07:44.309 "name": "BaseBdev2", 00:07:44.309 "aliases": [ 00:07:44.309 "4c8fee77-d22f-4cdb-98d8-e2b6c704a664" 00:07:44.309 ], 00:07:44.309 "product_name": "Malloc disk", 00:07:44.309 "block_size": 512, 00:07:44.309 "num_blocks": 65536, 00:07:44.309 "uuid": "4c8fee77-d22f-4cdb-98d8-e2b6c704a664", 00:07:44.309 "assigned_rate_limits": { 00:07:44.309 "rw_ios_per_sec": 0, 00:07:44.309 "rw_mbytes_per_sec": 0, 00:07:44.309 "r_mbytes_per_sec": 0, 00:07:44.309 "w_mbytes_per_sec": 0 00:07:44.309 }, 00:07:44.309 "claimed": true, 00:07:44.309 "claim_type": 
"exclusive_write", 00:07:44.309 "zoned": false, 00:07:44.309 "supported_io_types": { 00:07:44.309 "read": true, 00:07:44.309 "write": true, 00:07:44.309 "unmap": true, 00:07:44.309 "flush": true, 00:07:44.309 "reset": true, 00:07:44.309 "nvme_admin": false, 00:07:44.309 "nvme_io": false, 00:07:44.309 "nvme_io_md": false, 00:07:44.309 "write_zeroes": true, 00:07:44.309 "zcopy": true, 00:07:44.309 "get_zone_info": false, 00:07:44.309 "zone_management": false, 00:07:44.309 "zone_append": false, 00:07:44.309 "compare": false, 00:07:44.309 "compare_and_write": false, 00:07:44.310 "abort": true, 00:07:44.310 "seek_hole": false, 00:07:44.310 "seek_data": false, 00:07:44.310 "copy": true, 00:07:44.310 "nvme_iov_md": false 00:07:44.310 }, 00:07:44.310 "memory_domains": [ 00:07:44.310 { 00:07:44.310 "dma_device_id": "system", 00:07:44.310 "dma_device_type": 1 00:07:44.310 }, 00:07:44.310 { 00:07:44.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.310 "dma_device_type": 2 00:07:44.310 } 00:07:44.310 ], 00:07:44.310 "driver_specific": {} 00:07:44.310 } 00:07:44.310 ] 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.310 "name": "Existed_Raid", 00:07:44.310 "uuid": "4a816f5c-7b97-4253-a63f-7aed6e0dac39", 00:07:44.310 "strip_size_kb": 64, 00:07:44.310 "state": "online", 00:07:44.310 "raid_level": "raid0", 00:07:44.310 "superblock": true, 00:07:44.310 "num_base_bdevs": 2, 00:07:44.310 "num_base_bdevs_discovered": 2, 00:07:44.310 "num_base_bdevs_operational": 2, 00:07:44.310 "base_bdevs_list": [ 00:07:44.310 { 00:07:44.310 "name": "BaseBdev1", 00:07:44.310 "uuid": "d85946c2-f31e-4288-96a3-c9ef84a04f6f", 00:07:44.310 "is_configured": true, 00:07:44.310 "data_offset": 2048, 00:07:44.310 "data_size": 63488 
00:07:44.310 }, 00:07:44.310 { 00:07:44.310 "name": "BaseBdev2", 00:07:44.310 "uuid": "4c8fee77-d22f-4cdb-98d8-e2b6c704a664", 00:07:44.310 "is_configured": true, 00:07:44.310 "data_offset": 2048, 00:07:44.310 "data_size": 63488 00:07:44.310 } 00:07:44.310 ] 00:07:44.310 }' 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.310 18:57:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.877 [2024-11-26 18:57:11.258402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.877 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.877 "name": 
"Existed_Raid", 00:07:44.877 "aliases": [ 00:07:44.877 "4a816f5c-7b97-4253-a63f-7aed6e0dac39" 00:07:44.877 ], 00:07:44.877 "product_name": "Raid Volume", 00:07:44.877 "block_size": 512, 00:07:44.877 "num_blocks": 126976, 00:07:44.877 "uuid": "4a816f5c-7b97-4253-a63f-7aed6e0dac39", 00:07:44.877 "assigned_rate_limits": { 00:07:44.877 "rw_ios_per_sec": 0, 00:07:44.877 "rw_mbytes_per_sec": 0, 00:07:44.877 "r_mbytes_per_sec": 0, 00:07:44.877 "w_mbytes_per_sec": 0 00:07:44.877 }, 00:07:44.877 "claimed": false, 00:07:44.877 "zoned": false, 00:07:44.877 "supported_io_types": { 00:07:44.877 "read": true, 00:07:44.877 "write": true, 00:07:44.877 "unmap": true, 00:07:44.877 "flush": true, 00:07:44.877 "reset": true, 00:07:44.877 "nvme_admin": false, 00:07:44.877 "nvme_io": false, 00:07:44.877 "nvme_io_md": false, 00:07:44.877 "write_zeroes": true, 00:07:44.877 "zcopy": false, 00:07:44.877 "get_zone_info": false, 00:07:44.877 "zone_management": false, 00:07:44.877 "zone_append": false, 00:07:44.877 "compare": false, 00:07:44.877 "compare_and_write": false, 00:07:44.877 "abort": false, 00:07:44.877 "seek_hole": false, 00:07:44.877 "seek_data": false, 00:07:44.877 "copy": false, 00:07:44.877 "nvme_iov_md": false 00:07:44.877 }, 00:07:44.877 "memory_domains": [ 00:07:44.877 { 00:07:44.877 "dma_device_id": "system", 00:07:44.877 "dma_device_type": 1 00:07:44.877 }, 00:07:44.877 { 00:07:44.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.877 "dma_device_type": 2 00:07:44.877 }, 00:07:44.877 { 00:07:44.877 "dma_device_id": "system", 00:07:44.877 "dma_device_type": 1 00:07:44.877 }, 00:07:44.877 { 00:07:44.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.877 "dma_device_type": 2 00:07:44.877 } 00:07:44.877 ], 00:07:44.877 "driver_specific": { 00:07:44.877 "raid": { 00:07:44.877 "uuid": "4a816f5c-7b97-4253-a63f-7aed6e0dac39", 00:07:44.877 "strip_size_kb": 64, 00:07:44.877 "state": "online", 00:07:44.877 "raid_level": "raid0", 00:07:44.877 "superblock": true, 00:07:44.877 
"num_base_bdevs": 2, 00:07:44.877 "num_base_bdevs_discovered": 2, 00:07:44.877 "num_base_bdevs_operational": 2, 00:07:44.877 "base_bdevs_list": [ 00:07:44.877 { 00:07:44.877 "name": "BaseBdev1", 00:07:44.877 "uuid": "d85946c2-f31e-4288-96a3-c9ef84a04f6f", 00:07:44.877 "is_configured": true, 00:07:44.877 "data_offset": 2048, 00:07:44.877 "data_size": 63488 00:07:44.877 }, 00:07:44.877 { 00:07:44.877 "name": "BaseBdev2", 00:07:44.877 "uuid": "4c8fee77-d22f-4cdb-98d8-e2b6c704a664", 00:07:44.878 "is_configured": true, 00:07:44.878 "data_offset": 2048, 00:07:44.878 "data_size": 63488 00:07:44.878 } 00:07:44.878 ] 00:07:44.878 } 00:07:44.878 } 00:07:44.878 }' 00:07:44.878 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.878 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:44.878 BaseBdev2' 00:07:44.878 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.878 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.878 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.878 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:44.878 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.878 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.878 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.878 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.136 [2024-11-26 18:57:11.553923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:45.136 [2024-11-26 18:57:11.553989] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.136 [2024-11-26 18:57:11.554086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.136 18:57:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.136 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.136 "name": "Existed_Raid", 00:07:45.136 "uuid": "4a816f5c-7b97-4253-a63f-7aed6e0dac39", 00:07:45.136 "strip_size_kb": 64, 00:07:45.136 "state": "offline", 00:07:45.136 "raid_level": "raid0", 00:07:45.136 "superblock": true, 00:07:45.136 "num_base_bdevs": 2, 00:07:45.136 "num_base_bdevs_discovered": 1, 00:07:45.136 "num_base_bdevs_operational": 1, 00:07:45.136 "base_bdevs_list": [ 00:07:45.136 { 00:07:45.136 "name": null, 00:07:45.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.136 "is_configured": false, 00:07:45.136 "data_offset": 0, 00:07:45.136 "data_size": 63488 00:07:45.136 }, 00:07:45.136 { 00:07:45.136 "name": "BaseBdev2", 00:07:45.136 "uuid": "4c8fee77-d22f-4cdb-98d8-e2b6c704a664", 00:07:45.136 "is_configured": true, 00:07:45.136 "data_offset": 2048, 00:07:45.136 "data_size": 63488 00:07:45.136 } 00:07:45.137 ] 00:07:45.137 }' 00:07:45.137 18:57:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.137 18:57:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.703 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:45.703 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:45.703 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.703 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:45.703 18:57:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.703 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.703 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.703 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:45.703 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:45.703 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:45.703 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.703 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.703 [2024-11-26 18:57:12.243879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:45.703 [2024-11-26 18:57:12.244790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.962 18:57:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61201 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61201 ']' 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61201 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61201 00:07:45.962 killing process with pid 61201 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61201' 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61201 00:07:45.962 [2024-11-26 18:57:12.435540] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.962 18:57:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61201 00:07:45.962 [2024-11-26 18:57:12.451045] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.336 18:57:13 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:47.336 00:07:47.336 real 0m5.722s 00:07:47.336 user 0m8.436s 00:07:47.336 sys 0m0.863s 00:07:47.336 18:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.336 ************************************ 00:07:47.336 END TEST raid_state_function_test_sb 00:07:47.336 ************************************ 00:07:47.336 18:57:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.336 18:57:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:47.336 18:57:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:47.336 18:57:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.336 18:57:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.336 ************************************ 00:07:47.336 START TEST raid_superblock_test 00:07:47.336 ************************************ 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:47.336 18:57:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:47.336 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61464 00:07:47.337 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:47.337 18:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61464 00:07:47.337 18:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61464 ']' 00:07:47.337 18:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.337 18:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.337 18:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:47.337 18:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.337 18:57:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.337 [2024-11-26 18:57:13.843684] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:07:47.337 [2024-11-26 18:57:13.844117] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61464 ] 00:07:47.594 [2024-11-26 18:57:14.029942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.595 [2024-11-26 18:57:14.206328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.852 [2024-11-26 18:57:14.442939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.852 [2024-11-26 18:57:14.443154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:48.419 18:57:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.419 malloc1 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.419 [2024-11-26 18:57:14.896241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:48.419 [2024-11-26 18:57:14.896479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.419 [2024-11-26 18:57:14.896635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:48.419 [2024-11-26 18:57:14.896765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.419 [2024-11-26 18:57:14.899994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.419 [2024-11-26 18:57:14.900171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:48.419 pt1 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:48.419 18:57:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.419 malloc2 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.419 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.419 [2024-11-26 18:57:14.958238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:48.419 [2024-11-26 18:57:14.958347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.419 [2024-11-26 18:57:14.958395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:48.419 
[2024-11-26 18:57:14.958411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.420 [2024-11-26 18:57:14.961469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.420 [2024-11-26 18:57:14.961518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:48.420 pt2 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.420 [2024-11-26 18:57:14.970436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:48.420 [2024-11-26 18:57:14.973155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:48.420 [2024-11-26 18:57:14.973422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:48.420 [2024-11-26 18:57:14.973443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:48.420 [2024-11-26 18:57:14.973817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:48.420 [2024-11-26 18:57:14.974170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:48.420 [2024-11-26 18:57:14.974201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:48.420 [2024-11-26 18:57:14.974490] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.420 18:57:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.420 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.420 "name": "raid_bdev1", 00:07:48.420 "uuid": 
"430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9", 00:07:48.420 "strip_size_kb": 64, 00:07:48.420 "state": "online", 00:07:48.420 "raid_level": "raid0", 00:07:48.420 "superblock": true, 00:07:48.420 "num_base_bdevs": 2, 00:07:48.420 "num_base_bdevs_discovered": 2, 00:07:48.420 "num_base_bdevs_operational": 2, 00:07:48.420 "base_bdevs_list": [ 00:07:48.420 { 00:07:48.420 "name": "pt1", 00:07:48.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.420 "is_configured": true, 00:07:48.420 "data_offset": 2048, 00:07:48.420 "data_size": 63488 00:07:48.420 }, 00:07:48.420 { 00:07:48.420 "name": "pt2", 00:07:48.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.420 "is_configured": true, 00:07:48.420 "data_offset": 2048, 00:07:48.420 "data_size": 63488 00:07:48.420 } 00:07:48.420 ] 00:07:48.420 }' 00:07:48.420 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.420 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.986 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:48.986 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:48.986 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.986 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.986 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.986 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.986 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.986 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.986 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.986 18:57:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.986 [2024-11-26 18:57:15.510978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.986 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.986 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.986 "name": "raid_bdev1", 00:07:48.986 "aliases": [ 00:07:48.986 "430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9" 00:07:48.986 ], 00:07:48.986 "product_name": "Raid Volume", 00:07:48.986 "block_size": 512, 00:07:48.986 "num_blocks": 126976, 00:07:48.986 "uuid": "430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9", 00:07:48.986 "assigned_rate_limits": { 00:07:48.986 "rw_ios_per_sec": 0, 00:07:48.986 "rw_mbytes_per_sec": 0, 00:07:48.986 "r_mbytes_per_sec": 0, 00:07:48.986 "w_mbytes_per_sec": 0 00:07:48.986 }, 00:07:48.986 "claimed": false, 00:07:48.986 "zoned": false, 00:07:48.986 "supported_io_types": { 00:07:48.986 "read": true, 00:07:48.986 "write": true, 00:07:48.986 "unmap": true, 00:07:48.986 "flush": true, 00:07:48.986 "reset": true, 00:07:48.986 "nvme_admin": false, 00:07:48.986 "nvme_io": false, 00:07:48.986 "nvme_io_md": false, 00:07:48.986 "write_zeroes": true, 00:07:48.986 "zcopy": false, 00:07:48.987 "get_zone_info": false, 00:07:48.987 "zone_management": false, 00:07:48.987 "zone_append": false, 00:07:48.987 "compare": false, 00:07:48.987 "compare_and_write": false, 00:07:48.987 "abort": false, 00:07:48.987 "seek_hole": false, 00:07:48.987 "seek_data": false, 00:07:48.987 "copy": false, 00:07:48.987 "nvme_iov_md": false 00:07:48.987 }, 00:07:48.987 "memory_domains": [ 00:07:48.987 { 00:07:48.987 "dma_device_id": "system", 00:07:48.987 "dma_device_type": 1 00:07:48.987 }, 00:07:48.987 { 00:07:48.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.987 "dma_device_type": 2 00:07:48.987 }, 00:07:48.987 { 00:07:48.987 "dma_device_id": "system", 00:07:48.987 "dma_device_type": 
1 00:07:48.987 }, 00:07:48.987 { 00:07:48.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.987 "dma_device_type": 2 00:07:48.987 } 00:07:48.987 ], 00:07:48.987 "driver_specific": { 00:07:48.987 "raid": { 00:07:48.987 "uuid": "430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9", 00:07:48.987 "strip_size_kb": 64, 00:07:48.987 "state": "online", 00:07:48.987 "raid_level": "raid0", 00:07:48.987 "superblock": true, 00:07:48.987 "num_base_bdevs": 2, 00:07:48.987 "num_base_bdevs_discovered": 2, 00:07:48.987 "num_base_bdevs_operational": 2, 00:07:48.987 "base_bdevs_list": [ 00:07:48.987 { 00:07:48.987 "name": "pt1", 00:07:48.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.987 "is_configured": true, 00:07:48.987 "data_offset": 2048, 00:07:48.987 "data_size": 63488 00:07:48.987 }, 00:07:48.987 { 00:07:48.987 "name": "pt2", 00:07:48.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.987 "is_configured": true, 00:07:48.987 "data_offset": 2048, 00:07:48.987 "data_size": 63488 00:07:48.987 } 00:07:48.987 ] 00:07:48.987 } 00:07:48.987 } 00:07:48.987 }' 00:07:48.987 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:49.245 pt2' 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.245 18:57:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.245 [2024-11-26 18:57:15.795047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9 ']' 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.245 [2024-11-26 18:57:15.846648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.245 [2024-11-26 18:57:15.846690] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.245 [2024-11-26 18:57:15.846825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.245 [2024-11-26 18:57:15.846901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.245 [2024-11-26 18:57:15.846922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:49.245 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.503 18:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 [2024-11-26 18:57:16.002762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:49.503 [2024-11-26 18:57:16.005465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:49.503 [2024-11-26 18:57:16.005567] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:49.503 [2024-11-26 18:57:16.005663] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:49.503 [2024-11-26 18:57:16.005692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.503 [2024-11-26 18:57:16.005713] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:49.503 request: 00:07:49.503 { 00:07:49.503 "name": "raid_bdev1", 00:07:49.503 "raid_level": "raid0", 00:07:49.503 "base_bdevs": [ 00:07:49.503 "malloc1", 00:07:49.503 "malloc2" 00:07:49.503 ], 00:07:49.503 "strip_size_kb": 64, 00:07:49.503 "superblock": false, 00:07:49.503 "method": "bdev_raid_create", 00:07:49.503 "req_id": 1 00:07:49.503 } 00:07:49.503 Got JSON-RPC error response 00:07:49.503 response: 00:07:49.503 { 00:07:49.503 "code": -17, 00:07:49.503 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:49.503 } 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 [2024-11-26 18:57:16.074740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:49.503 [2024-11-26 18:57:16.074838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.503 [2024-11-26 18:57:16.074868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:49.503 [2024-11-26 18:57:16.074887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.503 [2024-11-26 18:57:16.078106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.503 [2024-11-26 18:57:16.078161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:49.503 [2024-11-26 18:57:16.078309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:49.503 [2024-11-26 18:57:16.078394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:49.503 pt1 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.761 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.761 "name": "raid_bdev1", 00:07:49.761 "uuid": "430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9", 00:07:49.761 "strip_size_kb": 64, 00:07:49.761 "state": "configuring", 00:07:49.761 "raid_level": "raid0", 00:07:49.761 "superblock": true, 00:07:49.761 "num_base_bdevs": 2, 00:07:49.761 "num_base_bdevs_discovered": 1, 00:07:49.761 "num_base_bdevs_operational": 2, 00:07:49.761 "base_bdevs_list": [ 00:07:49.761 { 00:07:49.761 "name": "pt1", 00:07:49.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:49.761 "is_configured": true, 00:07:49.761 "data_offset": 2048, 00:07:49.761 "data_size": 63488 00:07:49.761 }, 00:07:49.761 { 00:07:49.761 "name": null, 00:07:49.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.761 "is_configured": false, 00:07:49.761 "data_offset": 2048, 00:07:49.761 "data_size": 63488 00:07:49.761 } 00:07:49.761 ] 00:07:49.761 }' 00:07:49.761 18:57:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.761 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.019 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:50.019 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:50.019 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.019 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:50.019 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.019 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.277 [2024-11-26 18:57:16.642912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:50.277 [2024-11-26 18:57:16.643031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.277 [2024-11-26 18:57:16.643069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:50.277 [2024-11-26 18:57:16.643090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.277 [2024-11-26 18:57:16.643797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.277 [2024-11-26 18:57:16.643845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:50.277 [2024-11-26 18:57:16.643971] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:50.277 [2024-11-26 18:57:16.644018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:50.277 [2024-11-26 18:57:16.644179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.277 [2024-11-26 18:57:16.644204] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:50.277 [2024-11-26 18:57:16.644553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:50.277 [2024-11-26 18:57:16.644757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.277 [2024-11-26 18:57:16.644779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:50.277 [2024-11-26 18:57:16.644964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.277 pt2 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.277 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.277 "name": "raid_bdev1", 00:07:50.277 "uuid": "430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9", 00:07:50.277 "strip_size_kb": 64, 00:07:50.277 "state": "online", 00:07:50.278 "raid_level": "raid0", 00:07:50.278 "superblock": true, 00:07:50.278 "num_base_bdevs": 2, 00:07:50.278 "num_base_bdevs_discovered": 2, 00:07:50.278 "num_base_bdevs_operational": 2, 00:07:50.278 "base_bdevs_list": [ 00:07:50.278 { 00:07:50.278 "name": "pt1", 00:07:50.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.278 "is_configured": true, 00:07:50.278 "data_offset": 2048, 00:07:50.278 "data_size": 63488 00:07:50.278 }, 00:07:50.278 { 00:07:50.278 "name": "pt2", 00:07:50.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.278 "is_configured": true, 00:07:50.278 "data_offset": 2048, 00:07:50.278 "data_size": 63488 00:07:50.278 } 00:07:50.278 ] 00:07:50.278 }' 00:07:50.278 18:57:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.278 18:57:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:50.862 
18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.862 [2024-11-26 18:57:17.199412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.862 "name": "raid_bdev1", 00:07:50.862 "aliases": [ 00:07:50.862 "430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9" 00:07:50.862 ], 00:07:50.862 "product_name": "Raid Volume", 00:07:50.862 "block_size": 512, 00:07:50.862 "num_blocks": 126976, 00:07:50.862 "uuid": "430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9", 00:07:50.862 "assigned_rate_limits": { 00:07:50.862 "rw_ios_per_sec": 0, 00:07:50.862 "rw_mbytes_per_sec": 0, 00:07:50.862 "r_mbytes_per_sec": 0, 00:07:50.862 "w_mbytes_per_sec": 0 00:07:50.862 }, 00:07:50.862 "claimed": false, 00:07:50.862 "zoned": false, 00:07:50.862 "supported_io_types": { 00:07:50.862 "read": true, 00:07:50.862 "write": true, 00:07:50.862 "unmap": true, 00:07:50.862 "flush": true, 00:07:50.862 "reset": true, 00:07:50.862 "nvme_admin": false, 00:07:50.862 "nvme_io": false, 00:07:50.862 "nvme_io_md": false, 00:07:50.862 
"write_zeroes": true, 00:07:50.862 "zcopy": false, 00:07:50.862 "get_zone_info": false, 00:07:50.862 "zone_management": false, 00:07:50.862 "zone_append": false, 00:07:50.862 "compare": false, 00:07:50.862 "compare_and_write": false, 00:07:50.862 "abort": false, 00:07:50.862 "seek_hole": false, 00:07:50.862 "seek_data": false, 00:07:50.862 "copy": false, 00:07:50.862 "nvme_iov_md": false 00:07:50.862 }, 00:07:50.862 "memory_domains": [ 00:07:50.862 { 00:07:50.862 "dma_device_id": "system", 00:07:50.862 "dma_device_type": 1 00:07:50.862 }, 00:07:50.862 { 00:07:50.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.862 "dma_device_type": 2 00:07:50.862 }, 00:07:50.862 { 00:07:50.862 "dma_device_id": "system", 00:07:50.862 "dma_device_type": 1 00:07:50.862 }, 00:07:50.862 { 00:07:50.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.862 "dma_device_type": 2 00:07:50.862 } 00:07:50.862 ], 00:07:50.862 "driver_specific": { 00:07:50.862 "raid": { 00:07:50.862 "uuid": "430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9", 00:07:50.862 "strip_size_kb": 64, 00:07:50.862 "state": "online", 00:07:50.862 "raid_level": "raid0", 00:07:50.862 "superblock": true, 00:07:50.862 "num_base_bdevs": 2, 00:07:50.862 "num_base_bdevs_discovered": 2, 00:07:50.862 "num_base_bdevs_operational": 2, 00:07:50.862 "base_bdevs_list": [ 00:07:50.862 { 00:07:50.862 "name": "pt1", 00:07:50.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.862 "is_configured": true, 00:07:50.862 "data_offset": 2048, 00:07:50.862 "data_size": 63488 00:07:50.862 }, 00:07:50.862 { 00:07:50.862 "name": "pt2", 00:07:50.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.862 "is_configured": true, 00:07:50.862 "data_offset": 2048, 00:07:50.862 "data_size": 63488 00:07:50.862 } 00:07:50.862 ] 00:07:50.862 } 00:07:50.862 } 00:07:50.862 }' 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:50.862 pt2' 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.862 18:57:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.862 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.862 [2024-11-26 18:57:17.479456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9 '!=' 430edf67-5dd6-4f31-bcaf-6d0ccb18d5d9 ']' 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61464 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61464 ']' 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61464 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61464 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.121 killing process with pid 61464 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61464' 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61464 00:07:51.121 18:57:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61464 00:07:51.121 [2024-11-26 18:57:17.554749] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.121 [2024-11-26 18:57:17.554913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.121 [2024-11-26 18:57:17.555003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.121 [2024-11-26 18:57:17.555027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:51.379 [2024-11-26 18:57:17.766478] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.340 18:57:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:52.340 00:07:52.340 real 0m5.128s 00:07:52.340 user 0m7.492s 00:07:52.340 sys 0m0.799s 00:07:52.340 18:57:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.340 ************************************ 00:07:52.341 END TEST raid_superblock_test 00:07:52.341 ************************************ 00:07:52.341 18:57:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.341 18:57:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:52.341 18:57:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:52.341 18:57:18 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:52.341 18:57:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.341 ************************************ 00:07:52.341 START TEST raid_read_error_test 00:07:52.341 ************************************ 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p2X0mTAuyE 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61681 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61681 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61681 ']' 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.341 18:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.599 [2024-11-26 18:57:19.035668] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:07:52.599 [2024-11-26 18:57:19.035848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ] 00:07:52.599 [2024-11-26 18:57:19.211583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.857 [2024-11-26 18:57:19.363494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.114 [2024-11-26 18:57:19.598466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.114 [2024-11-26 18:57:19.598541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.682 BaseBdev1_malloc 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.682 true 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.682 [2024-11-26 18:57:20.077105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:53.682 [2024-11-26 18:57:20.077201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.682 [2024-11-26 18:57:20.077251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:53.682 [2024-11-26 18:57:20.077273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.682 [2024-11-26 18:57:20.080458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.682 [2024-11-26 18:57:20.080514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:53.682 BaseBdev1 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:53.682 BaseBdev2_malloc 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.682 true 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.682 [2024-11-26 18:57:20.150449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:53.682 [2024-11-26 18:57:20.150555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.682 [2024-11-26 18:57:20.150588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:53.682 [2024-11-26 18:57:20.150607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.682 [2024-11-26 18:57:20.153955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.682 [2024-11-26 18:57:20.154019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:53.682 BaseBdev2 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:53.682 18:57:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.682 [2024-11-26 18:57:20.162593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.682 [2024-11-26 18:57:20.165367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.682 [2024-11-26 18:57:20.165683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:53.682 [2024-11-26 18:57:20.165712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:53.682 [2024-11-26 18:57:20.166092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:53.682 [2024-11-26 18:57:20.166370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:53.682 [2024-11-26 18:57:20.166394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:53.682 [2024-11-26 18:57:20.166721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.682 "name": "raid_bdev1", 00:07:53.682 "uuid": "95f80959-0dc3-43f0-977c-42f0cd5f27a6", 00:07:53.682 "strip_size_kb": 64, 00:07:53.682 "state": "online", 00:07:53.682 "raid_level": "raid0", 00:07:53.682 "superblock": true, 00:07:53.682 "num_base_bdevs": 2, 00:07:53.682 "num_base_bdevs_discovered": 2, 00:07:53.682 "num_base_bdevs_operational": 2, 00:07:53.682 "base_bdevs_list": [ 00:07:53.682 { 00:07:53.682 "name": "BaseBdev1", 00:07:53.682 "uuid": "e562a61a-45ec-5bc3-84f4-905d83977cdf", 00:07:53.682 "is_configured": true, 00:07:53.682 "data_offset": 2048, 00:07:53.682 "data_size": 63488 00:07:53.682 }, 00:07:53.682 { 00:07:53.682 "name": "BaseBdev2", 00:07:53.682 "uuid": "2dd20571-8bbe-53b6-9639-205156b2bc67", 00:07:53.682 "is_configured": true, 00:07:53.682 "data_offset": 2048, 00:07:53.682 "data_size": 63488 00:07:53.682 } 00:07:53.682 ] 00:07:53.682 }' 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.682 18:57:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.249 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:54.249 18:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:54.249 [2024-11-26 18:57:20.728321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.183 18:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.184 18:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.184 18:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.184 18:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.184 "name": "raid_bdev1", 00:07:55.184 "uuid": "95f80959-0dc3-43f0-977c-42f0cd5f27a6", 00:07:55.184 "strip_size_kb": 64, 00:07:55.184 "state": "online", 00:07:55.184 "raid_level": "raid0", 00:07:55.184 "superblock": true, 00:07:55.184 "num_base_bdevs": 2, 00:07:55.184 "num_base_bdevs_discovered": 2, 00:07:55.184 "num_base_bdevs_operational": 2, 00:07:55.184 "base_bdevs_list": [ 00:07:55.184 { 00:07:55.184 "name": "BaseBdev1", 00:07:55.184 "uuid": "e562a61a-45ec-5bc3-84f4-905d83977cdf", 00:07:55.184 "is_configured": true, 00:07:55.184 "data_offset": 2048, 00:07:55.184 "data_size": 63488 00:07:55.184 }, 00:07:55.184 { 00:07:55.184 "name": "BaseBdev2", 00:07:55.184 "uuid": "2dd20571-8bbe-53b6-9639-205156b2bc67", 00:07:55.184 "is_configured": true, 00:07:55.184 "data_offset": 2048, 00:07:55.184 "data_size": 63488 00:07:55.184 } 00:07:55.184 ] 00:07:55.184 }' 00:07:55.184 18:57:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.184 18:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.804 [2024-11-26 18:57:22.204000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.804 [2024-11-26 18:57:22.204198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.804 [2024-11-26 18:57:22.207869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.804 [2024-11-26 18:57:22.208116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.804 [2024-11-26 18:57:22.208307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.804 [2024-11-26 18:57:22.208474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:55.804 { 00:07:55.804 "results": [ 00:07:55.804 { 00:07:55.804 "job": "raid_bdev1", 00:07:55.804 "core_mask": "0x1", 00:07:55.804 "workload": "randrw", 00:07:55.804 "percentage": 50, 00:07:55.804 "status": "finished", 00:07:55.804 "queue_depth": 1, 00:07:55.804 "io_size": 131072, 00:07:55.804 "runtime": 1.473148, 00:07:55.804 "iops": 9471.553435228503, 00:07:55.804 "mibps": 1183.9441794035629, 00:07:55.804 "io_failed": 1, 00:07:55.804 "io_timeout": 0, 00:07:55.804 "avg_latency_us": 148.78154976741763, 00:07:55.804 "min_latency_us": 44.68363636363637, 00:07:55.804 "max_latency_us": 1854.370909090909 00:07:55.804 } 00:07:55.804 ], 00:07:55.804 "core_count": 1 00:07:55.804 } 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61681 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61681 ']' 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61681 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61681 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61681' 00:07:55.804 killing process with pid 61681 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61681 00:07:55.804 18:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61681 00:07:55.804 [2024-11-26 18:57:22.247905] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.804 [2024-11-26 18:57:22.383927] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.181 18:57:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:57.181 18:57:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p2X0mTAuyE 00:07:57.181 18:57:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:57.181 18:57:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:07:57.181 18:57:23 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:57.181 18:57:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.181 18:57:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:57.181 18:57:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:07:57.181 00:07:57.181 real 0m4.708s 00:07:57.181 user 0m5.772s 00:07:57.181 sys 0m0.610s 00:07:57.181 18:57:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.181 ************************************ 00:07:57.181 END TEST raid_read_error_test 00:07:57.181 ************************************ 00:07:57.181 18:57:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.181 18:57:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:57.181 18:57:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:57.181 18:57:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.181 18:57:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.181 ************************************ 00:07:57.181 START TEST raid_write_error_test 00:07:57.181 ************************************ 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.181 18:57:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:57.181 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:57.182 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:57.182 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:57.182 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ac25mV6KB9 00:07:57.182 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61827 00:07:57.182 18:57:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.182 18:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61827 00:07:57.182 18:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61827 ']' 00:07:57.182 18:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.182 18:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.182 18:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.182 18:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.182 18:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.182 [2024-11-26 18:57:23.787173] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:07:57.182 [2024-11-26 18:57:23.787377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61827 ] 00:07:57.440 [2024-11-26 18:57:23.964734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.698 [2024-11-26 18:57:24.149161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.956 [2024-11-26 18:57:24.376276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.956 [2024-11-26 18:57:24.376609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.523 BaseBdev1_malloc 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.523 true 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.523 [2024-11-26 18:57:24.935171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:58.523 [2024-11-26 18:57:24.935255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.523 [2024-11-26 18:57:24.935309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:58.523 [2024-11-26 18:57:24.935332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.523 [2024-11-26 18:57:24.938338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.523 [2024-11-26 18:57:24.938392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:58.523 BaseBdev1 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.523 BaseBdev2_malloc 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:58.523 18:57:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.523 true 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.523 18:57:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.523 [2024-11-26 18:57:24.999526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:58.523 [2024-11-26 18:57:24.999755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.523 [2024-11-26 18:57:24.999793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:58.523 [2024-11-26 18:57:24.999811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.523 [2024-11-26 18:57:25.002771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.523 [2024-11-26 18:57:25.002956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:58.523 BaseBdev2 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.523 [2024-11-26 18:57:25.007720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:58.523 [2024-11-26 18:57:25.010330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.523 [2024-11-26 18:57:25.010593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.523 [2024-11-26 18:57:25.010627] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.523 [2024-11-26 18:57:25.010946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:58.523 [2024-11-26 18:57:25.011176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.523 [2024-11-26 18:57:25.011205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:58.523 [2024-11-26 18:57:25.011439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.523 "name": "raid_bdev1", 00:07:58.523 "uuid": "f1acc861-1057-4454-85d0-daeb4bbc1bb9", 00:07:58.523 "strip_size_kb": 64, 00:07:58.523 "state": "online", 00:07:58.523 "raid_level": "raid0", 00:07:58.523 "superblock": true, 00:07:58.523 "num_base_bdevs": 2, 00:07:58.523 "num_base_bdevs_discovered": 2, 00:07:58.523 "num_base_bdevs_operational": 2, 00:07:58.523 "base_bdevs_list": [ 00:07:58.523 { 00:07:58.523 "name": "BaseBdev1", 00:07:58.523 "uuid": "bc6c95d0-d632-548e-82f9-d462d2ff13d0", 00:07:58.523 "is_configured": true, 00:07:58.523 "data_offset": 2048, 00:07:58.523 "data_size": 63488 00:07:58.523 }, 00:07:58.523 { 00:07:58.523 "name": "BaseBdev2", 00:07:58.523 "uuid": "771460f5-4048-52a6-8159-b951210c9112", 00:07:58.523 "is_configured": true, 00:07:58.523 "data_offset": 2048, 00:07:58.523 "data_size": 63488 00:07:58.523 } 00:07:58.523 ] 00:07:58.523 }' 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.523 18:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.088 18:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:59.088 18:57:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:59.088 [2024-11-26 18:57:25.625355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.024 18:57:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.024 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.024 "name": "raid_bdev1", 00:08:00.024 "uuid": "f1acc861-1057-4454-85d0-daeb4bbc1bb9", 00:08:00.024 "strip_size_kb": 64, 00:08:00.024 "state": "online", 00:08:00.024 "raid_level": "raid0", 00:08:00.024 "superblock": true, 00:08:00.024 "num_base_bdevs": 2, 00:08:00.024 "num_base_bdevs_discovered": 2, 00:08:00.024 "num_base_bdevs_operational": 2, 00:08:00.024 "base_bdevs_list": [ 00:08:00.024 { 00:08:00.024 "name": "BaseBdev1", 00:08:00.024 "uuid": "bc6c95d0-d632-548e-82f9-d462d2ff13d0", 00:08:00.025 "is_configured": true, 00:08:00.025 "data_offset": 2048, 00:08:00.025 "data_size": 63488 00:08:00.025 }, 00:08:00.025 { 00:08:00.025 "name": "BaseBdev2", 00:08:00.025 "uuid": "771460f5-4048-52a6-8159-b951210c9112", 00:08:00.025 "is_configured": true, 00:08:00.025 "data_offset": 2048, 00:08:00.025 "data_size": 63488 00:08:00.025 } 00:08:00.025 ] 00:08:00.025 }' 00:08:00.025 18:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.025 18:57:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.640 [2024-11-26 18:57:27.057772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.640 [2024-11-26 18:57:27.057987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.640 [2024-11-26 18:57:27.061620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.640 [2024-11-26 18:57:27.061803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.640 [2024-11-26 18:57:27.061866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.640 [2024-11-26 18:57:27.061888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:00.640 { 00:08:00.640 "results": [ 00:08:00.640 { 00:08:00.640 "job": "raid_bdev1", 00:08:00.640 "core_mask": "0x1", 00:08:00.640 "workload": "randrw", 00:08:00.640 "percentage": 50, 00:08:00.640 "status": "finished", 00:08:00.640 "queue_depth": 1, 00:08:00.640 "io_size": 131072, 00:08:00.640 "runtime": 1.430172, 00:08:00.640 "iops": 9699.532643626082, 00:08:00.640 "mibps": 1212.4415804532603, 00:08:00.640 "io_failed": 1, 00:08:00.640 "io_timeout": 0, 00:08:00.640 "avg_latency_us": 145.0998515101276, 00:08:00.640 "min_latency_us": 43.985454545454544, 00:08:00.640 "max_latency_us": 1854.370909090909 00:08:00.640 } 00:08:00.640 ], 00:08:00.640 "core_count": 1 00:08:00.640 } 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61827 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61827 ']' 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61827 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61827 00:08:00.640 killing process with pid 61827 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61827' 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61827 00:08:00.640 [2024-11-26 18:57:27.098560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.640 18:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61827 00:08:00.640 [2024-11-26 18:57:27.232918] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.015 18:57:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:02.015 18:57:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ac25mV6KB9 00:08:02.015 18:57:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:02.015 18:57:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:08:02.015 18:57:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:02.015 18:57:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.015 18:57:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:02.015 18:57:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:08:02.015 00:08:02.015 real 0m4.765s 00:08:02.015 user 0m5.942s 00:08:02.015 sys 0m0.605s 00:08:02.015 18:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.015 ************************************ 00:08:02.015 END TEST raid_write_error_test 00:08:02.015 ************************************ 00:08:02.015 18:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.015 18:57:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:02.015 18:57:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:02.015 18:57:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:02.015 18:57:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.015 18:57:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:02.015 ************************************ 00:08:02.015 START TEST raid_state_function_test 00:08:02.015 ************************************ 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:02.015 Process raid pid: 61970 00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61970 
00:08:02.015 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61970' 00:08:02.016 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:02.016 18:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61970 00:08:02.016 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61970 ']' 00:08:02.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.016 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.016 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.016 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.016 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.016 18:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.016 [2024-11-26 18:57:28.609707] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:08:02.016 [2024-11-26 18:57:28.609896] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.274 [2024-11-26 18:57:28.800811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.533 [2024-11-26 18:57:28.963755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.790 [2024-11-26 18:57:29.198143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.790 [2024-11-26 18:57:29.198202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.357 [2024-11-26 18:57:29.716030] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.357 [2024-11-26 18:57:29.716106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.357 [2024-11-26 18:57:29.716126] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.357 [2024-11-26 18:57:29.716143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.357 18:57:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.357 "name": "Existed_Raid", 00:08:03.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.357 "strip_size_kb": 64, 00:08:03.357 "state": "configuring", 00:08:03.357 
"raid_level": "concat", 00:08:03.357 "superblock": false, 00:08:03.357 "num_base_bdevs": 2, 00:08:03.357 "num_base_bdevs_discovered": 0, 00:08:03.357 "num_base_bdevs_operational": 2, 00:08:03.357 "base_bdevs_list": [ 00:08:03.357 { 00:08:03.357 "name": "BaseBdev1", 00:08:03.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.357 "is_configured": false, 00:08:03.357 "data_offset": 0, 00:08:03.357 "data_size": 0 00:08:03.357 }, 00:08:03.357 { 00:08:03.357 "name": "BaseBdev2", 00:08:03.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.357 "is_configured": false, 00:08:03.357 "data_offset": 0, 00:08:03.357 "data_size": 0 00:08:03.357 } 00:08:03.357 ] 00:08:03.357 }' 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.357 18:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.615 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.615 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.615 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.615 [2024-11-26 18:57:30.216188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.615 [2024-11-26 18:57:30.216256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:03.615 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.615 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.615 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.615 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:03.615 [2024-11-26 18:57:30.224178] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.615 [2024-11-26 18:57:30.224260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.615 [2024-11-26 18:57:30.224310] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.615 [2024-11-26 18:57:30.224348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.615 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.615 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:03.615 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.615 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.872 [2024-11-26 18:57:30.275688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.872 BaseBdev1 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.872 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.872 [ 00:08:03.872 { 00:08:03.872 "name": "BaseBdev1", 00:08:03.872 "aliases": [ 00:08:03.872 "af6105ec-8b18-4ba0-ab7d-e9f32449c3c0" 00:08:03.872 ], 00:08:03.872 "product_name": "Malloc disk", 00:08:03.872 "block_size": 512, 00:08:03.872 "num_blocks": 65536, 00:08:03.872 "uuid": "af6105ec-8b18-4ba0-ab7d-e9f32449c3c0", 00:08:03.872 "assigned_rate_limits": { 00:08:03.872 "rw_ios_per_sec": 0, 00:08:03.872 "rw_mbytes_per_sec": 0, 00:08:03.872 "r_mbytes_per_sec": 0, 00:08:03.872 "w_mbytes_per_sec": 0 00:08:03.872 }, 00:08:03.872 "claimed": true, 00:08:03.872 "claim_type": "exclusive_write", 00:08:03.872 "zoned": false, 00:08:03.872 "supported_io_types": { 00:08:03.872 "read": true, 00:08:03.872 "write": true, 00:08:03.872 "unmap": true, 00:08:03.872 "flush": true, 00:08:03.872 "reset": true, 00:08:03.872 "nvme_admin": false, 00:08:03.872 "nvme_io": false, 00:08:03.872 "nvme_io_md": false, 00:08:03.872 "write_zeroes": true, 00:08:03.872 "zcopy": true, 00:08:03.872 "get_zone_info": false, 00:08:03.872 "zone_management": false, 00:08:03.872 "zone_append": false, 00:08:03.873 "compare": false, 00:08:03.873 "compare_and_write": false, 00:08:03.873 "abort": true, 00:08:03.873 "seek_hole": false, 00:08:03.873 "seek_data": false, 00:08:03.873 "copy": true, 00:08:03.873 "nvme_iov_md": 
false 00:08:03.873 }, 00:08:03.873 "memory_domains": [ 00:08:03.873 { 00:08:03.873 "dma_device_id": "system", 00:08:03.873 "dma_device_type": 1 00:08:03.873 }, 00:08:03.873 { 00:08:03.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.873 "dma_device_type": 2 00:08:03.873 } 00:08:03.873 ], 00:08:03.873 "driver_specific": {} 00:08:03.873 } 00:08:03.873 ] 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.873 
18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.873 "name": "Existed_Raid", 00:08:03.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.873 "strip_size_kb": 64, 00:08:03.873 "state": "configuring", 00:08:03.873 "raid_level": "concat", 00:08:03.873 "superblock": false, 00:08:03.873 "num_base_bdevs": 2, 00:08:03.873 "num_base_bdevs_discovered": 1, 00:08:03.873 "num_base_bdevs_operational": 2, 00:08:03.873 "base_bdevs_list": [ 00:08:03.873 { 00:08:03.873 "name": "BaseBdev1", 00:08:03.873 "uuid": "af6105ec-8b18-4ba0-ab7d-e9f32449c3c0", 00:08:03.873 "is_configured": true, 00:08:03.873 "data_offset": 0, 00:08:03.873 "data_size": 65536 00:08:03.873 }, 00:08:03.873 { 00:08:03.873 "name": "BaseBdev2", 00:08:03.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.873 "is_configured": false, 00:08:03.873 "data_offset": 0, 00:08:03.873 "data_size": 0 00:08:03.873 } 00:08:03.873 ] 00:08:03.873 }' 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.873 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.437 [2024-11-26 18:57:30.828706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.437 [2024-11-26 18:57:30.828782] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.437 [2024-11-26 18:57:30.840833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.437 [2024-11-26 18:57:30.843585] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.437 [2024-11-26 18:57:30.843832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.437 "name": "Existed_Raid", 00:08:04.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.437 "strip_size_kb": 64, 00:08:04.437 "state": "configuring", 00:08:04.437 "raid_level": "concat", 00:08:04.437 "superblock": false, 00:08:04.437 "num_base_bdevs": 2, 00:08:04.437 "num_base_bdevs_discovered": 1, 00:08:04.437 "num_base_bdevs_operational": 2, 00:08:04.437 "base_bdevs_list": [ 00:08:04.437 { 00:08:04.437 "name": "BaseBdev1", 00:08:04.437 "uuid": "af6105ec-8b18-4ba0-ab7d-e9f32449c3c0", 00:08:04.437 "is_configured": true, 00:08:04.437 "data_offset": 0, 00:08:04.437 "data_size": 65536 00:08:04.437 }, 00:08:04.437 { 00:08:04.437 "name": "BaseBdev2", 00:08:04.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.437 "is_configured": false, 00:08:04.437 "data_offset": 0, 00:08:04.437 "data_size": 0 00:08:04.437 } 
00:08:04.437 ] 00:08:04.437 }' 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.437 18:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.035 [2024-11-26 18:57:31.448458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.035 [2024-11-26 18:57:31.448807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:05.035 [2024-11-26 18:57:31.448832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:05.035 [2024-11-26 18:57:31.449195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:05.035 [2024-11-26 18:57:31.449483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:05.035 [2024-11-26 18:57:31.449506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:05.035 [2024-11-26 18:57:31.449856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.035 BaseBdev2 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:05.035 18:57:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.035 [ 00:08:05.035 { 00:08:05.035 "name": "BaseBdev2", 00:08:05.035 "aliases": [ 00:08:05.035 "0dd5ae67-5ea6-4b6d-b0b5-a02278094663" 00:08:05.035 ], 00:08:05.035 "product_name": "Malloc disk", 00:08:05.035 "block_size": 512, 00:08:05.035 "num_blocks": 65536, 00:08:05.035 "uuid": "0dd5ae67-5ea6-4b6d-b0b5-a02278094663", 00:08:05.035 "assigned_rate_limits": { 00:08:05.035 "rw_ios_per_sec": 0, 00:08:05.035 "rw_mbytes_per_sec": 0, 00:08:05.035 "r_mbytes_per_sec": 0, 00:08:05.035 "w_mbytes_per_sec": 0 00:08:05.035 }, 00:08:05.035 "claimed": true, 00:08:05.035 "claim_type": "exclusive_write", 00:08:05.035 "zoned": false, 00:08:05.035 "supported_io_types": { 00:08:05.035 "read": true, 00:08:05.035 "write": true, 00:08:05.035 "unmap": true, 00:08:05.035 "flush": true, 00:08:05.035 "reset": true, 00:08:05.035 "nvme_admin": false, 00:08:05.035 "nvme_io": false, 00:08:05.035 "nvme_io_md": 
false, 00:08:05.035 "write_zeroes": true, 00:08:05.035 "zcopy": true, 00:08:05.035 "get_zone_info": false, 00:08:05.035 "zone_management": false, 00:08:05.035 "zone_append": false, 00:08:05.035 "compare": false, 00:08:05.035 "compare_and_write": false, 00:08:05.035 "abort": true, 00:08:05.035 "seek_hole": false, 00:08:05.035 "seek_data": false, 00:08:05.035 "copy": true, 00:08:05.035 "nvme_iov_md": false 00:08:05.035 }, 00:08:05.035 "memory_domains": [ 00:08:05.035 { 00:08:05.035 "dma_device_id": "system", 00:08:05.035 "dma_device_type": 1 00:08:05.035 }, 00:08:05.035 { 00:08:05.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.035 "dma_device_type": 2 00:08:05.035 } 00:08:05.035 ], 00:08:05.035 "driver_specific": {} 00:08:05.035 } 00:08:05.035 ] 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.035 "name": "Existed_Raid", 00:08:05.035 "uuid": "8f7672d3-7fbf-49cb-aa14-9b0db9fe23fc", 00:08:05.035 "strip_size_kb": 64, 00:08:05.035 "state": "online", 00:08:05.035 "raid_level": "concat", 00:08:05.035 "superblock": false, 00:08:05.035 "num_base_bdevs": 2, 00:08:05.035 "num_base_bdevs_discovered": 2, 00:08:05.035 "num_base_bdevs_operational": 2, 00:08:05.035 "base_bdevs_list": [ 00:08:05.035 { 00:08:05.035 "name": "BaseBdev1", 00:08:05.035 "uuid": "af6105ec-8b18-4ba0-ab7d-e9f32449c3c0", 00:08:05.035 "is_configured": true, 00:08:05.035 "data_offset": 0, 00:08:05.035 "data_size": 65536 00:08:05.035 }, 00:08:05.035 { 00:08:05.035 "name": "BaseBdev2", 00:08:05.035 "uuid": "0dd5ae67-5ea6-4b6d-b0b5-a02278094663", 00:08:05.035 "is_configured": true, 00:08:05.035 "data_offset": 0, 00:08:05.035 "data_size": 65536 00:08:05.035 } 00:08:05.035 ] 00:08:05.035 }' 00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:05.035 18:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 [2024-11-26 18:57:32.065066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.601 "name": "Existed_Raid", 00:08:05.601 "aliases": [ 00:08:05.601 "8f7672d3-7fbf-49cb-aa14-9b0db9fe23fc" 00:08:05.601 ], 00:08:05.601 "product_name": "Raid Volume", 00:08:05.601 "block_size": 512, 00:08:05.601 "num_blocks": 131072, 00:08:05.601 "uuid": "8f7672d3-7fbf-49cb-aa14-9b0db9fe23fc", 00:08:05.601 "assigned_rate_limits": { 00:08:05.601 "rw_ios_per_sec": 0, 00:08:05.601 "rw_mbytes_per_sec": 0, 00:08:05.601 "r_mbytes_per_sec": 
0, 00:08:05.601 "w_mbytes_per_sec": 0 00:08:05.601 }, 00:08:05.601 "claimed": false, 00:08:05.601 "zoned": false, 00:08:05.601 "supported_io_types": { 00:08:05.601 "read": true, 00:08:05.601 "write": true, 00:08:05.601 "unmap": true, 00:08:05.601 "flush": true, 00:08:05.601 "reset": true, 00:08:05.601 "nvme_admin": false, 00:08:05.601 "nvme_io": false, 00:08:05.601 "nvme_io_md": false, 00:08:05.601 "write_zeroes": true, 00:08:05.601 "zcopy": false, 00:08:05.601 "get_zone_info": false, 00:08:05.601 "zone_management": false, 00:08:05.601 "zone_append": false, 00:08:05.601 "compare": false, 00:08:05.601 "compare_and_write": false, 00:08:05.601 "abort": false, 00:08:05.601 "seek_hole": false, 00:08:05.601 "seek_data": false, 00:08:05.601 "copy": false, 00:08:05.601 "nvme_iov_md": false 00:08:05.601 }, 00:08:05.601 "memory_domains": [ 00:08:05.601 { 00:08:05.601 "dma_device_id": "system", 00:08:05.601 "dma_device_type": 1 00:08:05.601 }, 00:08:05.601 { 00:08:05.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.601 "dma_device_type": 2 00:08:05.601 }, 00:08:05.601 { 00:08:05.601 "dma_device_id": "system", 00:08:05.601 "dma_device_type": 1 00:08:05.601 }, 00:08:05.601 { 00:08:05.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.601 "dma_device_type": 2 00:08:05.601 } 00:08:05.601 ], 00:08:05.601 "driver_specific": { 00:08:05.601 "raid": { 00:08:05.601 "uuid": "8f7672d3-7fbf-49cb-aa14-9b0db9fe23fc", 00:08:05.601 "strip_size_kb": 64, 00:08:05.601 "state": "online", 00:08:05.601 "raid_level": "concat", 00:08:05.601 "superblock": false, 00:08:05.601 "num_base_bdevs": 2, 00:08:05.601 "num_base_bdevs_discovered": 2, 00:08:05.601 "num_base_bdevs_operational": 2, 00:08:05.601 "base_bdevs_list": [ 00:08:05.601 { 00:08:05.601 "name": "BaseBdev1", 00:08:05.601 "uuid": "af6105ec-8b18-4ba0-ab7d-e9f32449c3c0", 00:08:05.601 "is_configured": true, 00:08:05.601 "data_offset": 0, 00:08:05.601 "data_size": 65536 00:08:05.601 }, 00:08:05.601 { 00:08:05.601 "name": "BaseBdev2", 
00:08:05.601 "uuid": "0dd5ae67-5ea6-4b6d-b0b5-a02278094663", 00:08:05.601 "is_configured": true, 00:08:05.601 "data_offset": 0, 00:08:05.601 "data_size": 65536 00:08:05.601 } 00:08:05.601 ] 00:08:05.601 } 00:08:05.601 } 00:08:05.601 }' 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:05.601 BaseBdev2' 00:08:05.601 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.860 [2024-11-26 18:57:32.356883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:05.860 [2024-11-26 18:57:32.356944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.860 [2024-11-26 18:57:32.357028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.860 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.118 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.118 "name": "Existed_Raid", 00:08:06.119 "uuid": "8f7672d3-7fbf-49cb-aa14-9b0db9fe23fc", 00:08:06.119 "strip_size_kb": 64, 00:08:06.119 
"state": "offline", 00:08:06.119 "raid_level": "concat", 00:08:06.119 "superblock": false, 00:08:06.119 "num_base_bdevs": 2, 00:08:06.119 "num_base_bdevs_discovered": 1, 00:08:06.119 "num_base_bdevs_operational": 1, 00:08:06.119 "base_bdevs_list": [ 00:08:06.119 { 00:08:06.119 "name": null, 00:08:06.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.119 "is_configured": false, 00:08:06.119 "data_offset": 0, 00:08:06.119 "data_size": 65536 00:08:06.119 }, 00:08:06.119 { 00:08:06.119 "name": "BaseBdev2", 00:08:06.119 "uuid": "0dd5ae67-5ea6-4b6d-b0b5-a02278094663", 00:08:06.119 "is_configured": true, 00:08:06.119 "data_offset": 0, 00:08:06.119 "data_size": 65536 00:08:06.119 } 00:08:06.119 ] 00:08:06.119 }' 00:08:06.119 18:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.119 18:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.685 [2024-11-26 18:57:33.076093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:06.685 [2024-11-26 18:57:33.076211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61970 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61970 ']' 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61970 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61970 00:08:06.685 killing process with pid 61970 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61970' 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61970 00:08:06.685 18:57:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61970 00:08:06.685 [2024-11-26 18:57:33.277414] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.685 [2024-11-26 18:57:33.293360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:08.060 00:08:08.060 real 0m5.894s 00:08:08.060 user 0m8.891s 00:08:08.060 sys 0m0.866s 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.060 ************************************ 00:08:08.060 END TEST raid_state_function_test 00:08:08.060 ************************************ 00:08:08.060 18:57:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:08.060 18:57:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:08.060 18:57:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.060 18:57:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.060 ************************************ 00:08:08.060 START TEST raid_state_function_test_sb 00:08:08.060 ************************************ 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:08.060 Process raid pid: 62229 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62229 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62229' 00:08:08.060 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:08.061 18:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62229 00:08:08.061 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62229 ']' 00:08:08.061 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.061 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.061 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:08:08.061 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.061 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.061 18:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.061 [2024-11-26 18:57:34.583129] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:08:08.061 [2024-11-26 18:57:34.583714] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.320 [2024-11-26 18:57:34.789139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.320 [2024-11-26 18:57:34.924247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.579 [2024-11-26 18:57:35.136701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.579 [2024-11-26 18:57:35.137018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.146 [2024-11-26 18:57:35.575383] bdev.c:8626:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:09.146 [2024-11-26 18:57:35.575781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.146 [2024-11-26 18:57:35.575926] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.146 [2024-11-26 18:57:35.575990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.146 "name": "Existed_Raid", 00:08:09.146 "uuid": "cb1caf5f-6650-4718-a968-4dd70e5aa601", 00:08:09.146 "strip_size_kb": 64, 00:08:09.146 "state": "configuring", 00:08:09.146 "raid_level": "concat", 00:08:09.146 "superblock": true, 00:08:09.146 "num_base_bdevs": 2, 00:08:09.146 "num_base_bdevs_discovered": 0, 00:08:09.146 "num_base_bdevs_operational": 2, 00:08:09.146 "base_bdevs_list": [ 00:08:09.146 { 00:08:09.146 "name": "BaseBdev1", 00:08:09.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.146 "is_configured": false, 00:08:09.146 "data_offset": 0, 00:08:09.146 "data_size": 0 00:08:09.146 }, 00:08:09.146 { 00:08:09.146 "name": "BaseBdev2", 00:08:09.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.146 "is_configured": false, 00:08:09.146 "data_offset": 0, 00:08:09.146 "data_size": 0 00:08:09.146 } 00:08:09.146 ] 00:08:09.146 }' 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.146 18:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.725 [2024-11-26 18:57:36.091464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:09.725 [2024-11-26 18:57:36.091549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.725 [2024-11-26 18:57:36.103467] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.725 [2024-11-26 18:57:36.103566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.725 [2024-11-26 18:57:36.103583] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.725 [2024-11-26 18:57:36.103604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.725 [2024-11-26 18:57:36.153148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.725 BaseBdev1 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.725 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.726 [ 00:08:09.726 { 00:08:09.726 "name": "BaseBdev1", 00:08:09.726 "aliases": [ 00:08:09.726 "ccae0acb-e5c1-4c4c-8323-90bef6889271" 00:08:09.726 ], 00:08:09.726 "product_name": "Malloc disk", 00:08:09.726 "block_size": 512, 00:08:09.726 "num_blocks": 65536, 00:08:09.726 "uuid": "ccae0acb-e5c1-4c4c-8323-90bef6889271", 00:08:09.726 "assigned_rate_limits": { 00:08:09.726 "rw_ios_per_sec": 0, 00:08:09.726 "rw_mbytes_per_sec": 0, 00:08:09.726 "r_mbytes_per_sec": 0, 00:08:09.726 "w_mbytes_per_sec": 0 00:08:09.726 }, 00:08:09.726 "claimed": true, 
00:08:09.726 "claim_type": "exclusive_write", 00:08:09.726 "zoned": false, 00:08:09.726 "supported_io_types": { 00:08:09.726 "read": true, 00:08:09.726 "write": true, 00:08:09.726 "unmap": true, 00:08:09.726 "flush": true, 00:08:09.726 "reset": true, 00:08:09.726 "nvme_admin": false, 00:08:09.726 "nvme_io": false, 00:08:09.726 "nvme_io_md": false, 00:08:09.726 "write_zeroes": true, 00:08:09.726 "zcopy": true, 00:08:09.726 "get_zone_info": false, 00:08:09.726 "zone_management": false, 00:08:09.726 "zone_append": false, 00:08:09.726 "compare": false, 00:08:09.726 "compare_and_write": false, 00:08:09.726 "abort": true, 00:08:09.726 "seek_hole": false, 00:08:09.726 "seek_data": false, 00:08:09.726 "copy": true, 00:08:09.726 "nvme_iov_md": false 00:08:09.726 }, 00:08:09.726 "memory_domains": [ 00:08:09.726 { 00:08:09.726 "dma_device_id": "system", 00:08:09.726 "dma_device_type": 1 00:08:09.726 }, 00:08:09.726 { 00:08:09.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.726 "dma_device_type": 2 00:08:09.726 } 00:08:09.726 ], 00:08:09.726 "driver_specific": {} 00:08:09.726 } 00:08:09.726 ] 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.726 18:57:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.726 "name": "Existed_Raid", 00:08:09.726 "uuid": "e779518b-abad-44d8-a958-db6c44084819", 00:08:09.726 "strip_size_kb": 64, 00:08:09.726 "state": "configuring", 00:08:09.726 "raid_level": "concat", 00:08:09.726 "superblock": true, 00:08:09.726 "num_base_bdevs": 2, 00:08:09.726 "num_base_bdevs_discovered": 1, 00:08:09.726 "num_base_bdevs_operational": 2, 00:08:09.726 "base_bdevs_list": [ 00:08:09.726 { 00:08:09.726 "name": "BaseBdev1", 00:08:09.726 "uuid": "ccae0acb-e5c1-4c4c-8323-90bef6889271", 00:08:09.726 "is_configured": true, 00:08:09.726 "data_offset": 2048, 00:08:09.726 "data_size": 63488 00:08:09.726 }, 00:08:09.726 { 00:08:09.726 "name": "BaseBdev2", 00:08:09.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.726 
"is_configured": false, 00:08:09.726 "data_offset": 0, 00:08:09.726 "data_size": 0 00:08:09.726 } 00:08:09.726 ] 00:08:09.726 }' 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.726 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.292 [2024-11-26 18:57:36.705410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.292 [2024-11-26 18:57:36.705504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.292 [2024-11-26 18:57:36.717509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.292 [2024-11-26 18:57:36.720061] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.292 [2024-11-26 18:57:36.720133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.292 18:57:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.292 18:57:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.292 "name": "Existed_Raid", 00:08:10.292 "uuid": "3c728881-c4eb-425a-b9a2-3bc1537925ea", 00:08:10.292 "strip_size_kb": 64, 00:08:10.292 "state": "configuring", 00:08:10.292 "raid_level": "concat", 00:08:10.292 "superblock": true, 00:08:10.292 "num_base_bdevs": 2, 00:08:10.292 "num_base_bdevs_discovered": 1, 00:08:10.292 "num_base_bdevs_operational": 2, 00:08:10.292 "base_bdevs_list": [ 00:08:10.292 { 00:08:10.292 "name": "BaseBdev1", 00:08:10.292 "uuid": "ccae0acb-e5c1-4c4c-8323-90bef6889271", 00:08:10.292 "is_configured": true, 00:08:10.292 "data_offset": 2048, 00:08:10.292 "data_size": 63488 00:08:10.292 }, 00:08:10.292 { 00:08:10.292 "name": "BaseBdev2", 00:08:10.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.292 "is_configured": false, 00:08:10.292 "data_offset": 0, 00:08:10.292 "data_size": 0 00:08:10.292 } 00:08:10.292 ] 00:08:10.292 }' 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.292 18:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.859 [2024-11-26 18:57:37.260504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.859 [2024-11-26 18:57:37.261785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:10.859 [2024-11-26 18:57:37.261814] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:10.859 BaseBdev2 00:08:10.859 [2024-11-26 18:57:37.262166] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:10.859 [2024-11-26 18:57:37.262399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:10.859 [2024-11-26 18:57:37.262424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:10.859 [2024-11-26 18:57:37.262604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.859 
18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.859 [ 00:08:10.859 { 00:08:10.859 "name": "BaseBdev2", 00:08:10.859 "aliases": [ 00:08:10.859 "4ce76525-c566-4448-86ab-c1ddc5243623" 00:08:10.859 ], 00:08:10.859 "product_name": "Malloc disk", 00:08:10.859 "block_size": 512, 00:08:10.859 "num_blocks": 65536, 00:08:10.859 "uuid": "4ce76525-c566-4448-86ab-c1ddc5243623", 00:08:10.859 "assigned_rate_limits": { 00:08:10.859 "rw_ios_per_sec": 0, 00:08:10.859 "rw_mbytes_per_sec": 0, 00:08:10.859 "r_mbytes_per_sec": 0, 00:08:10.859 "w_mbytes_per_sec": 0 00:08:10.859 }, 00:08:10.859 "claimed": true, 00:08:10.859 "claim_type": "exclusive_write", 00:08:10.859 "zoned": false, 00:08:10.859 "supported_io_types": { 00:08:10.859 "read": true, 00:08:10.859 "write": true, 00:08:10.859 "unmap": true, 00:08:10.859 "flush": true, 00:08:10.859 "reset": true, 00:08:10.859 "nvme_admin": false, 00:08:10.859 "nvme_io": false, 00:08:10.859 "nvme_io_md": false, 00:08:10.859 "write_zeroes": true, 00:08:10.859 "zcopy": true, 00:08:10.859 "get_zone_info": false, 00:08:10.859 "zone_management": false, 00:08:10.859 "zone_append": false, 00:08:10.859 "compare": false, 00:08:10.859 "compare_and_write": false, 00:08:10.859 "abort": true, 00:08:10.859 "seek_hole": false, 00:08:10.859 "seek_data": false, 00:08:10.859 "copy": true, 00:08:10.859 "nvme_iov_md": false 00:08:10.859 }, 00:08:10.859 "memory_domains": [ 00:08:10.859 { 00:08:10.859 "dma_device_id": "system", 00:08:10.859 "dma_device_type": 1 00:08:10.859 }, 00:08:10.859 { 00:08:10.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.859 "dma_device_type": 2 00:08:10.859 } 00:08:10.859 ], 00:08:10.859 "driver_specific": {} 00:08:10.859 } 00:08:10.859 ] 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:10.859 18:57:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.859 18:57:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.859 "name": "Existed_Raid", 00:08:10.859 "uuid": "3c728881-c4eb-425a-b9a2-3bc1537925ea", 00:08:10.859 "strip_size_kb": 64, 00:08:10.859 "state": "online", 00:08:10.859 "raid_level": "concat", 00:08:10.859 "superblock": true, 00:08:10.859 "num_base_bdevs": 2, 00:08:10.859 "num_base_bdevs_discovered": 2, 00:08:10.859 "num_base_bdevs_operational": 2, 00:08:10.859 "base_bdevs_list": [ 00:08:10.859 { 00:08:10.859 "name": "BaseBdev1", 00:08:10.859 "uuid": "ccae0acb-e5c1-4c4c-8323-90bef6889271", 00:08:10.859 "is_configured": true, 00:08:10.859 "data_offset": 2048, 00:08:10.859 "data_size": 63488 00:08:10.859 }, 00:08:10.859 { 00:08:10.859 "name": "BaseBdev2", 00:08:10.859 "uuid": "4ce76525-c566-4448-86ab-c1ddc5243623", 00:08:10.859 "is_configured": true, 00:08:10.859 "data_offset": 2048, 00:08:10.859 "data_size": 63488 00:08:10.859 } 00:08:10.859 ] 00:08:10.859 }' 00:08:10.859 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.860 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.425 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.425 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.426 [2024-11-26 18:57:37.809389] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.426 "name": "Existed_Raid", 00:08:11.426 "aliases": [ 00:08:11.426 "3c728881-c4eb-425a-b9a2-3bc1537925ea" 00:08:11.426 ], 00:08:11.426 "product_name": "Raid Volume", 00:08:11.426 "block_size": 512, 00:08:11.426 "num_blocks": 126976, 00:08:11.426 "uuid": "3c728881-c4eb-425a-b9a2-3bc1537925ea", 00:08:11.426 "assigned_rate_limits": { 00:08:11.426 "rw_ios_per_sec": 0, 00:08:11.426 "rw_mbytes_per_sec": 0, 00:08:11.426 "r_mbytes_per_sec": 0, 00:08:11.426 "w_mbytes_per_sec": 0 00:08:11.426 }, 00:08:11.426 "claimed": false, 00:08:11.426 "zoned": false, 00:08:11.426 "supported_io_types": { 00:08:11.426 "read": true, 00:08:11.426 "write": true, 00:08:11.426 "unmap": true, 00:08:11.426 "flush": true, 00:08:11.426 "reset": true, 00:08:11.426 "nvme_admin": false, 00:08:11.426 "nvme_io": false, 00:08:11.426 "nvme_io_md": false, 00:08:11.426 "write_zeroes": true, 00:08:11.426 "zcopy": false, 00:08:11.426 "get_zone_info": false, 00:08:11.426 "zone_management": false, 00:08:11.426 "zone_append": false, 00:08:11.426 "compare": false, 00:08:11.426 "compare_and_write": false, 00:08:11.426 "abort": false, 00:08:11.426 "seek_hole": false, 00:08:11.426 "seek_data": false, 00:08:11.426 "copy": false, 00:08:11.426 "nvme_iov_md": false 00:08:11.426 }, 00:08:11.426 "memory_domains": [ 00:08:11.426 { 00:08:11.426 
"dma_device_id": "system", 00:08:11.426 "dma_device_type": 1 00:08:11.426 }, 00:08:11.426 { 00:08:11.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.426 "dma_device_type": 2 00:08:11.426 }, 00:08:11.426 { 00:08:11.426 "dma_device_id": "system", 00:08:11.426 "dma_device_type": 1 00:08:11.426 }, 00:08:11.426 { 00:08:11.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.426 "dma_device_type": 2 00:08:11.426 } 00:08:11.426 ], 00:08:11.426 "driver_specific": { 00:08:11.426 "raid": { 00:08:11.426 "uuid": "3c728881-c4eb-425a-b9a2-3bc1537925ea", 00:08:11.426 "strip_size_kb": 64, 00:08:11.426 "state": "online", 00:08:11.426 "raid_level": "concat", 00:08:11.426 "superblock": true, 00:08:11.426 "num_base_bdevs": 2, 00:08:11.426 "num_base_bdevs_discovered": 2, 00:08:11.426 "num_base_bdevs_operational": 2, 00:08:11.426 "base_bdevs_list": [ 00:08:11.426 { 00:08:11.426 "name": "BaseBdev1", 00:08:11.426 "uuid": "ccae0acb-e5c1-4c4c-8323-90bef6889271", 00:08:11.426 "is_configured": true, 00:08:11.426 "data_offset": 2048, 00:08:11.426 "data_size": 63488 00:08:11.426 }, 00:08:11.426 { 00:08:11.426 "name": "BaseBdev2", 00:08:11.426 "uuid": "4ce76525-c566-4448-86ab-c1ddc5243623", 00:08:11.426 "is_configured": true, 00:08:11.426 "data_offset": 2048, 00:08:11.426 "data_size": 63488 00:08:11.426 } 00:08:11.426 ] 00:08:11.426 } 00:08:11.426 } 00:08:11.426 }' 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:11.426 BaseBdev2' 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.426 18:57:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.426 18:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.426 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.426 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.426 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.426 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.426 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.426 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.426 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.684 [2024-11-26 18:57:38.072880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.684 [2024-11-26 18:57:38.072931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.684 [2024-11-26 18:57:38.073010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.684 "name": "Existed_Raid", 00:08:11.684 "uuid": "3c728881-c4eb-425a-b9a2-3bc1537925ea", 00:08:11.684 "strip_size_kb": 64, 00:08:11.684 "state": "offline", 00:08:11.684 "raid_level": "concat", 00:08:11.684 "superblock": true, 00:08:11.684 "num_base_bdevs": 2, 00:08:11.684 "num_base_bdevs_discovered": 1, 00:08:11.684 "num_base_bdevs_operational": 1, 00:08:11.684 "base_bdevs_list": [ 00:08:11.684 { 00:08:11.684 "name": null, 00:08:11.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.684 "is_configured": false, 00:08:11.684 "data_offset": 0, 00:08:11.684 "data_size": 63488 00:08:11.684 }, 00:08:11.684 { 00:08:11.684 "name": "BaseBdev2", 00:08:11.684 "uuid": "4ce76525-c566-4448-86ab-c1ddc5243623", 00:08:11.684 "is_configured": true, 00:08:11.684 "data_offset": 2048, 00:08:11.684 "data_size": 63488 00:08:11.684 } 00:08:11.684 ] 
00:08:11.684 }' 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.684 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.251 [2024-11-26 18:57:38.733891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:12.251 [2024-11-26 18:57:38.733980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.251 18:57:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.251 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62229 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62229 ']' 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62229 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62229 00:08:12.509 killing process with pid 62229 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62229' 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62229 00:08:12.509 [2024-11-26 18:57:38.913544] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.509 18:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62229 00:08:12.509 [2024-11-26 18:57:38.928926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.881 ************************************ 00:08:13.881 END TEST raid_state_function_test_sb 00:08:13.881 ************************************ 00:08:13.881 18:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:13.881 00:08:13.881 real 0m5.650s 00:08:13.881 user 0m8.404s 00:08:13.881 sys 0m0.833s 00:08:13.881 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.881 18:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.881 18:57:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:13.881 18:57:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:13.881 18:57:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.881 18:57:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.881 ************************************ 00:08:13.881 START TEST raid_superblock_test 00:08:13.881 ************************************ 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:13.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62488 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62488 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62488 ']' 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.881 18:57:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.881 [2024-11-26 18:57:40.252348] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:08:13.881 [2024-11-26 18:57:40.252534] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62488 ] 00:08:13.881 [2024-11-26 18:57:40.444796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.140 [2024-11-26 18:57:40.612339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.399 [2024-11-26 18:57:40.840292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.399 [2024-11-26 18:57:40.840353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.657 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:14.658 
18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.658 malloc1 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.658 [2024-11-26 18:57:41.264005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:14.658 [2024-11-26 18:57:41.264212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.658 [2024-11-26 18:57:41.264308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:14.658 [2024-11-26 18:57:41.264570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.658 [2024-11-26 18:57:41.267544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.658 [2024-11-26 18:57:41.267707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.658 pt1 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.658 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.917 malloc2 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.917 [2024-11-26 18:57:41.324835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.917 [2024-11-26 18:57:41.324911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.917 [2024-11-26 18:57:41.324951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:14.917 [2024-11-26 18:57:41.324966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.917 [2024-11-26 18:57:41.328004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.917 [2024-11-26 18:57:41.328049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.917 
pt2 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.917 [2024-11-26 18:57:41.332903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.917 [2024-11-26 18:57:41.335616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.917 [2024-11-26 18:57:41.335952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:14.917 [2024-11-26 18:57:41.336082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:14.917 [2024-11-26 18:57:41.336472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:14.917 [2024-11-26 18:57:41.336805] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:14.917 [2024-11-26 18:57:41.336936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:14.917 [2024-11-26 18:57:41.337352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.917 "name": "raid_bdev1", 00:08:14.917 "uuid": "6ecabc35-e5d9-40c4-a1bc-63a09828184c", 00:08:14.917 "strip_size_kb": 64, 00:08:14.917 "state": "online", 00:08:14.917 "raid_level": "concat", 00:08:14.917 "superblock": true, 00:08:14.917 "num_base_bdevs": 2, 00:08:14.917 "num_base_bdevs_discovered": 2, 00:08:14.917 "num_base_bdevs_operational": 2, 00:08:14.917 "base_bdevs_list": [ 00:08:14.917 { 00:08:14.917 "name": "pt1", 
00:08:14.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.917 "is_configured": true, 00:08:14.917 "data_offset": 2048, 00:08:14.917 "data_size": 63488 00:08:14.917 }, 00:08:14.917 { 00:08:14.917 "name": "pt2", 00:08:14.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.917 "is_configured": true, 00:08:14.917 "data_offset": 2048, 00:08:14.917 "data_size": 63488 00:08:14.917 } 00:08:14.917 ] 00:08:14.917 }' 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.917 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.485 [2024-11-26 18:57:41.873817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.485 18:57:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.485 "name": "raid_bdev1", 00:08:15.485 "aliases": [ 00:08:15.485 "6ecabc35-e5d9-40c4-a1bc-63a09828184c" 00:08:15.485 ], 00:08:15.485 "product_name": "Raid Volume", 00:08:15.485 "block_size": 512, 00:08:15.485 "num_blocks": 126976, 00:08:15.485 "uuid": "6ecabc35-e5d9-40c4-a1bc-63a09828184c", 00:08:15.485 "assigned_rate_limits": { 00:08:15.485 "rw_ios_per_sec": 0, 00:08:15.485 "rw_mbytes_per_sec": 0, 00:08:15.485 "r_mbytes_per_sec": 0, 00:08:15.485 "w_mbytes_per_sec": 0 00:08:15.485 }, 00:08:15.485 "claimed": false, 00:08:15.485 "zoned": false, 00:08:15.485 "supported_io_types": { 00:08:15.485 "read": true, 00:08:15.485 "write": true, 00:08:15.485 "unmap": true, 00:08:15.485 "flush": true, 00:08:15.485 "reset": true, 00:08:15.485 "nvme_admin": false, 00:08:15.485 "nvme_io": false, 00:08:15.485 "nvme_io_md": false, 00:08:15.485 "write_zeroes": true, 00:08:15.485 "zcopy": false, 00:08:15.485 "get_zone_info": false, 00:08:15.485 "zone_management": false, 00:08:15.485 "zone_append": false, 00:08:15.485 "compare": false, 00:08:15.485 "compare_and_write": false, 00:08:15.485 "abort": false, 00:08:15.485 "seek_hole": false, 00:08:15.485 "seek_data": false, 00:08:15.485 "copy": false, 00:08:15.485 "nvme_iov_md": false 00:08:15.485 }, 00:08:15.485 "memory_domains": [ 00:08:15.485 { 00:08:15.485 "dma_device_id": "system", 00:08:15.485 "dma_device_type": 1 00:08:15.485 }, 00:08:15.485 { 00:08:15.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.485 "dma_device_type": 2 00:08:15.485 }, 00:08:15.485 { 00:08:15.485 "dma_device_id": "system", 00:08:15.485 "dma_device_type": 1 00:08:15.485 }, 00:08:15.485 { 00:08:15.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.485 "dma_device_type": 2 00:08:15.485 } 00:08:15.485 ], 00:08:15.485 "driver_specific": { 00:08:15.485 "raid": { 00:08:15.485 "uuid": "6ecabc35-e5d9-40c4-a1bc-63a09828184c", 00:08:15.485 "strip_size_kb": 64, 00:08:15.485 "state": "online", 00:08:15.485 
"raid_level": "concat", 00:08:15.485 "superblock": true, 00:08:15.485 "num_base_bdevs": 2, 00:08:15.485 "num_base_bdevs_discovered": 2, 00:08:15.485 "num_base_bdevs_operational": 2, 00:08:15.485 "base_bdevs_list": [ 00:08:15.485 { 00:08:15.485 "name": "pt1", 00:08:15.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.485 "is_configured": true, 00:08:15.485 "data_offset": 2048, 00:08:15.485 "data_size": 63488 00:08:15.485 }, 00:08:15.485 { 00:08:15.485 "name": "pt2", 00:08:15.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.485 "is_configured": true, 00:08:15.485 "data_offset": 2048, 00:08:15.485 "data_size": 63488 00:08:15.485 } 00:08:15.485 ] 00:08:15.485 } 00:08:15.485 } 00:08:15.485 }' 00:08:15.486 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.486 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:15.486 pt2' 00:08:15.486 18:57:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.486 18:57:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.486 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.745 [2024-11-26 18:57:42.141849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6ecabc35-e5d9-40c4-a1bc-63a09828184c 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
6ecabc35-e5d9-40c4-a1bc-63a09828184c ']' 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.745 [2024-11-26 18:57:42.189494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.745 [2024-11-26 18:57:42.189536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.745 [2024-11-26 18:57:42.189665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.745 [2024-11-26 18:57:42.189754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.745 [2024-11-26 18:57:42.189775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.745 18:57:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.745 [2024-11-26 18:57:42.345550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:15.745 [2024-11-26 18:57:42.348154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:15.745 [2024-11-26 18:57:42.348420] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:15.745 [2024-11-26 18:57:42.348506] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:15.745 [2024-11-26 18:57:42.348533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.745 [2024-11-26 18:57:42.348549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:15.745 request: 00:08:15.745 { 00:08:15.745 "name": "raid_bdev1", 00:08:15.745 "raid_level": "concat", 00:08:15.745 "base_bdevs": [ 00:08:15.745 "malloc1", 00:08:15.745 "malloc2" 00:08:15.745 ], 00:08:15.745 "strip_size_kb": 64, 
00:08:15.745 "superblock": false, 00:08:15.745 "method": "bdev_raid_create", 00:08:15.745 "req_id": 1 00:08:15.745 } 00:08:15.745 Got JSON-RPC error response 00:08:15.745 response: 00:08:15.745 { 00:08:15.745 "code": -17, 00:08:15.745 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:15.745 } 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.745 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.004 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.004 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:16.004 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:16.004 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:16.004 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.004 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.004 [2024-11-26 18:57:42.405542] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:08:16.004 [2024-11-26 18:57:42.405740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.004 [2024-11-26 18:57:42.405777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:16.004 [2024-11-26 18:57:42.405797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.004 [2024-11-26 18:57:42.408845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.004 [2024-11-26 18:57:42.409004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:16.004 [2024-11-26 18:57:42.409128] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:16.004 [2024-11-26 18:57:42.409229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:16.004 pt1 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.005 "name": "raid_bdev1", 00:08:16.005 "uuid": "6ecabc35-e5d9-40c4-a1bc-63a09828184c", 00:08:16.005 "strip_size_kb": 64, 00:08:16.005 "state": "configuring", 00:08:16.005 "raid_level": "concat", 00:08:16.005 "superblock": true, 00:08:16.005 "num_base_bdevs": 2, 00:08:16.005 "num_base_bdevs_discovered": 1, 00:08:16.005 "num_base_bdevs_operational": 2, 00:08:16.005 "base_bdevs_list": [ 00:08:16.005 { 00:08:16.005 "name": "pt1", 00:08:16.005 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.005 "is_configured": true, 00:08:16.005 "data_offset": 2048, 00:08:16.005 "data_size": 63488 00:08:16.005 }, 00:08:16.005 { 00:08:16.005 "name": null, 00:08:16.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.005 "is_configured": false, 00:08:16.005 "data_offset": 2048, 00:08:16.005 "data_size": 63488 00:08:16.005 } 00:08:16.005 ] 00:08:16.005 }' 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.005 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.572 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:16.572 18:57:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:16.572 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.572 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:16.572 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.572 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.572 [2024-11-26 18:57:42.913721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:16.572 [2024-11-26 18:57:42.913826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.572 [2024-11-26 18:57:42.913863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:16.572 [2024-11-26 18:57:42.913882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.572 [2024-11-26 18:57:42.914531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.572 [2024-11-26 18:57:42.914570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:16.572 [2024-11-26 18:57:42.914685] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:16.572 [2024-11-26 18:57:42.914728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.572 [2024-11-26 18:57:42.914880] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:16.572 [2024-11-26 18:57:42.914901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:16.573 [2024-11-26 18:57:42.915220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:16.573 [2024-11-26 18:57:42.915423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:08:16.573 [2024-11-26 18:57:42.915439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:16.573 [2024-11-26 18:57:42.915617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.573 pt2 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.573 "name": "raid_bdev1", 00:08:16.573 "uuid": "6ecabc35-e5d9-40c4-a1bc-63a09828184c", 00:08:16.573 "strip_size_kb": 64, 00:08:16.573 "state": "online", 00:08:16.573 "raid_level": "concat", 00:08:16.573 "superblock": true, 00:08:16.573 "num_base_bdevs": 2, 00:08:16.573 "num_base_bdevs_discovered": 2, 00:08:16.573 "num_base_bdevs_operational": 2, 00:08:16.573 "base_bdevs_list": [ 00:08:16.573 { 00:08:16.573 "name": "pt1", 00:08:16.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.573 "is_configured": true, 00:08:16.573 "data_offset": 2048, 00:08:16.573 "data_size": 63488 00:08:16.573 }, 00:08:16.573 { 00:08:16.573 "name": "pt2", 00:08:16.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.573 "is_configured": true, 00:08:16.573 "data_offset": 2048, 00:08:16.573 "data_size": 63488 00:08:16.573 } 00:08:16.573 ] 00:08:16.573 }' 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.573 18:57:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.230 18:57:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.230 [2024-11-26 18:57:43.478149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.230 "name": "raid_bdev1", 00:08:17.230 "aliases": [ 00:08:17.230 "6ecabc35-e5d9-40c4-a1bc-63a09828184c" 00:08:17.230 ], 00:08:17.230 "product_name": "Raid Volume", 00:08:17.230 "block_size": 512, 00:08:17.230 "num_blocks": 126976, 00:08:17.230 "uuid": "6ecabc35-e5d9-40c4-a1bc-63a09828184c", 00:08:17.230 "assigned_rate_limits": { 00:08:17.230 "rw_ios_per_sec": 0, 00:08:17.230 "rw_mbytes_per_sec": 0, 00:08:17.230 "r_mbytes_per_sec": 0, 00:08:17.230 "w_mbytes_per_sec": 0 00:08:17.230 }, 00:08:17.230 "claimed": false, 00:08:17.230 "zoned": false, 00:08:17.230 "supported_io_types": { 00:08:17.230 "read": true, 00:08:17.230 "write": true, 00:08:17.230 "unmap": true, 00:08:17.230 "flush": true, 00:08:17.230 "reset": true, 00:08:17.230 "nvme_admin": false, 00:08:17.230 "nvme_io": false, 00:08:17.230 "nvme_io_md": false, 00:08:17.230 "write_zeroes": true, 00:08:17.230 "zcopy": false, 00:08:17.230 "get_zone_info": false, 00:08:17.230 "zone_management": false, 00:08:17.230 "zone_append": false, 00:08:17.230 "compare": false, 00:08:17.230 "compare_and_write": false, 00:08:17.230 "abort": false, 00:08:17.230 "seek_hole": false, 00:08:17.230 
"seek_data": false, 00:08:17.230 "copy": false, 00:08:17.230 "nvme_iov_md": false 00:08:17.230 }, 00:08:17.230 "memory_domains": [ 00:08:17.230 { 00:08:17.230 "dma_device_id": "system", 00:08:17.230 "dma_device_type": 1 00:08:17.230 }, 00:08:17.230 { 00:08:17.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.230 "dma_device_type": 2 00:08:17.230 }, 00:08:17.230 { 00:08:17.230 "dma_device_id": "system", 00:08:17.230 "dma_device_type": 1 00:08:17.230 }, 00:08:17.230 { 00:08:17.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.230 "dma_device_type": 2 00:08:17.230 } 00:08:17.230 ], 00:08:17.230 "driver_specific": { 00:08:17.230 "raid": { 00:08:17.230 "uuid": "6ecabc35-e5d9-40c4-a1bc-63a09828184c", 00:08:17.230 "strip_size_kb": 64, 00:08:17.230 "state": "online", 00:08:17.230 "raid_level": "concat", 00:08:17.230 "superblock": true, 00:08:17.230 "num_base_bdevs": 2, 00:08:17.230 "num_base_bdevs_discovered": 2, 00:08:17.230 "num_base_bdevs_operational": 2, 00:08:17.230 "base_bdevs_list": [ 00:08:17.230 { 00:08:17.230 "name": "pt1", 00:08:17.230 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:17.230 "is_configured": true, 00:08:17.230 "data_offset": 2048, 00:08:17.230 "data_size": 63488 00:08:17.230 }, 00:08:17.230 { 00:08:17.230 "name": "pt2", 00:08:17.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:17.230 "is_configured": true, 00:08:17.230 "data_offset": 2048, 00:08:17.230 "data_size": 63488 00:08:17.230 } 00:08:17.230 ] 00:08:17.230 } 00:08:17.230 } 00:08:17.230 }' 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:17.230 pt2' 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.230 18:57:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 
00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.230 [2024-11-26 18:57:43.722195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6ecabc35-e5d9-40c4-a1bc-63a09828184c '!=' 6ecabc35-e5d9-40c4-a1bc-63a09828184c ']' 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62488 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62488 ']' 00:08:17.230 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62488 00:08:17.231 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:17.231 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.231 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62488 00:08:17.231 killing process with pid 62488 00:08:17.231 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.231 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.231 18:57:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62488' 00:08:17.231 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62488 00:08:17.231 [2024-11-26 18:57:43.801720] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.231 18:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62488 00:08:17.231 [2024-11-26 18:57:43.801864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.231 [2024-11-26 18:57:43.801941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.231 [2024-11-26 18:57:43.801962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:17.489 [2024-11-26 18:57:43.999880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.865 ************************************ 00:08:18.865 END TEST raid_superblock_test 00:08:18.865 ************************************ 00:08:18.865 18:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:18.865 00:08:18.865 real 0m5.051s 00:08:18.865 user 0m7.258s 00:08:18.865 sys 0m0.820s 00:08:18.865 18:57:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.865 18:57:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.865 18:57:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:18.865 18:57:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:18.865 18:57:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.865 18:57:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.865 ************************************ 00:08:18.865 START TEST raid_read_error_test 00:08:18.865 ************************************ 00:08:18.865 18:57:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:18.865 18:57:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RC6831Ab8c 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62704 00:08:18.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62704 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62704 ']' 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.865 18:57:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.865 [2024-11-26 18:57:45.352933] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:08:18.865 [2024-11-26 18:57:45.353100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62704 ] 00:08:19.124 [2024-11-26 18:57:45.531451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.124 [2024-11-26 18:57:45.713027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.383 [2024-11-26 18:57:45.946846] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.383 [2024-11-26 18:57:45.946934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.952 BaseBdev1_malloc 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.952 true 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.952 [2024-11-26 18:57:46.485642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:19.952 [2024-11-26 18:57:46.485884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.952 [2024-11-26 18:57:46.485930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:19.952 [2024-11-26 18:57:46.485952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.952 [2024-11-26 18:57:46.488943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.952 [2024-11-26 18:57:46.489118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:19.952 BaseBdev1 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.952 BaseBdev2_malloc 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.952 true 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.952 [2024-11-26 18:57:46.559133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:19.952 [2024-11-26 18:57:46.559213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.952 [2024-11-26 18:57:46.559244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:19.952 [2024-11-26 18:57:46.559262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.952 [2024-11-26 18:57:46.562300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.952 [2024-11-26 18:57:46.562352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:19.952 BaseBdev2 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.952 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.213 [2024-11-26 18:57:46.571231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:20.213 [2024-11-26 18:57:46.574062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.213 [2024-11-26 18:57:46.574371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:20.213 [2024-11-26 18:57:46.574398] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:20.213 [2024-11-26 18:57:46.574746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:20.213 [2024-11-26 18:57:46.575139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:20.213 [2024-11-26 18:57:46.575171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:20.213 [2024-11-26 18:57:46.575456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.213 "name": "raid_bdev1", 00:08:20.213 "uuid": "ee60da80-29a2-4b7e-b5b2-1db3769d3ea9", 00:08:20.213 "strip_size_kb": 64, 00:08:20.213 "state": "online", 00:08:20.213 "raid_level": "concat", 00:08:20.213 "superblock": true, 00:08:20.213 "num_base_bdevs": 2, 00:08:20.213 "num_base_bdevs_discovered": 2, 00:08:20.213 "num_base_bdevs_operational": 2, 00:08:20.213 "base_bdevs_list": [ 00:08:20.213 { 00:08:20.213 "name": "BaseBdev1", 00:08:20.213 "uuid": "03d7d6be-f411-50b5-8d22-6ad013ab94e5", 00:08:20.213 "is_configured": true, 00:08:20.213 "data_offset": 2048, 00:08:20.213 "data_size": 63488 00:08:20.213 }, 00:08:20.213 { 00:08:20.213 "name": "BaseBdev2", 00:08:20.213 "uuid": "4b9f018b-02ad-561b-ae67-1279f775e19d", 00:08:20.213 "is_configured": true, 00:08:20.213 "data_offset": 2048, 00:08:20.213 "data_size": 63488 00:08:20.213 } 00:08:20.213 ] 00:08:20.213 }' 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.213 18:57:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.472 18:57:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:20.472 18:57:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:20.787 [2024-11-26 18:57:47.233106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.722 "name": "raid_bdev1", 00:08:21.722 "uuid": "ee60da80-29a2-4b7e-b5b2-1db3769d3ea9", 00:08:21.722 "strip_size_kb": 64, 00:08:21.722 "state": "online", 00:08:21.722 "raid_level": "concat", 00:08:21.722 "superblock": true, 00:08:21.722 "num_base_bdevs": 2, 00:08:21.722 "num_base_bdevs_discovered": 2, 00:08:21.722 "num_base_bdevs_operational": 2, 00:08:21.722 "base_bdevs_list": [ 00:08:21.722 { 00:08:21.722 "name": "BaseBdev1", 00:08:21.722 "uuid": "03d7d6be-f411-50b5-8d22-6ad013ab94e5", 00:08:21.722 "is_configured": true, 00:08:21.722 "data_offset": 2048, 00:08:21.722 "data_size": 63488 00:08:21.722 }, 00:08:21.722 { 00:08:21.722 "name": "BaseBdev2", 00:08:21.722 "uuid": "4b9f018b-02ad-561b-ae67-1279f775e19d", 00:08:21.722 "is_configured": true, 00:08:21.722 "data_offset": 2048, 00:08:21.722 "data_size": 63488 00:08:21.722 } 00:08:21.722 ] 00:08:21.722 }' 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.722 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.289 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:22.289 18:57:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.289 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.289 [2024-11-26 18:57:48.623957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.289 [2024-11-26 18:57:48.624217] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.289 [2024-11-26 18:57:48.629270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.289 [2024-11-26 18:57:48.629697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.289 { 00:08:22.289 "results": [ 00:08:22.289 { 00:08:22.289 "job": "raid_bdev1", 00:08:22.289 "core_mask": "0x1", 00:08:22.289 "workload": "randrw", 00:08:22.289 "percentage": 50, 00:08:22.289 "status": "finished", 00:08:22.289 "queue_depth": 1, 00:08:22.289 "io_size": 131072, 00:08:22.289 "runtime": 1.388855, 00:08:22.289 "iops": 9443.750427510431, 00:08:22.289 "mibps": 1180.468803438804, 00:08:22.289 "io_failed": 1, 00:08:22.289 "io_timeout": 0, 00:08:22.289 "avg_latency_us": 148.65477167035147, 00:08:22.289 "min_latency_us": 43.52, 00:08:22.289 "max_latency_us": 1869.2654545454545 00:08:22.289 } 00:08:22.289 ], 00:08:22.289 "core_count": 1 00:08:22.289 } 00:08:22.289 [2024-11-26 18:57:48.629974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.289 [2024-11-26 18:57:48.630036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:22.289 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.289 18:57:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62704 00:08:22.289 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62704 ']' 00:08:22.289 18:57:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62704 00:08:22.289 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:22.289 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.289 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62704 00:08:22.290 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.290 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.290 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62704' 00:08:22.290 killing process with pid 62704 00:08:22.290 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62704 00:08:22.290 18:57:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62704 00:08:22.290 [2024-11-26 18:57:48.677045] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.290 [2024-11-26 18:57:48.861330] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:23.663 18:57:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RC6831Ab8c 00:08:23.663 18:57:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:23.663 18:57:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:23.663 18:57:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:23.663 18:57:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:23.663 18:57:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.663 18:57:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.663 18:57:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:23.663 00:08:23.663 real 0m5.025s 00:08:23.663 user 0m6.161s 00:08:23.663 sys 0m0.665s 00:08:23.663 18:57:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.663 18:57:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.663 ************************************ 00:08:23.663 END TEST raid_read_error_test 00:08:23.663 ************************************ 00:08:23.921 18:57:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:23.921 18:57:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:23.921 18:57:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.921 18:57:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:23.921 ************************************ 00:08:23.921 START TEST raid_write_error_test 00:08:23.921 ************************************ 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.921 18:57:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:23.921 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xpwp9OMeyC 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62855 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62855 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:23.922 18:57:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62855 ']' 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.922 18:57:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.922 [2024-11-26 18:57:50.458359] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:08:23.922 [2024-11-26 18:57:50.458861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62855 ] 00:08:24.179 [2024-11-26 18:57:50.651578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.438 [2024-11-26 18:57:50.803922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.438 [2024-11-26 18:57:51.034957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.438 [2024-11-26 18:57:51.035013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.004 BaseBdev1_malloc 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.004 true 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.004 [2024-11-26 18:57:51.527153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:25.004 [2024-11-26 18:57:51.527252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.004 [2024-11-26 18:57:51.527312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:25.004 [2024-11-26 18:57:51.527336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.004 [2024-11-26 18:57:51.530862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.004 [2024-11-26 18:57:51.530918] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:25.004 BaseBdev1 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.004 BaseBdev2_malloc 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.004 true 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.004 [2024-11-26 18:57:51.592836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:25.004 [2024-11-26 18:57:51.592925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.004 [2024-11-26 18:57:51.592970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:25.004 
[2024-11-26 18:57:51.592988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.004 [2024-11-26 18:57:51.596029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.004 [2024-11-26 18:57:51.596079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:25.004 BaseBdev2 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.004 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.005 [2024-11-26 18:57:51.600947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.005 [2024-11-26 18:57:51.603590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.005 [2024-11-26 18:57:51.603864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:25.005 [2024-11-26 18:57:51.603891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:25.005 [2024-11-26 18:57:51.604197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:25.005 [2024-11-26 18:57:51.604462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:25.005 [2024-11-26 18:57:51.604495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:25.005 [2024-11-26 18:57:51.604692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.005 
18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.005 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.264 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.264 "name": "raid_bdev1", 00:08:25.264 "uuid": "2a79fdf1-b770-41f2-816c-46d5669d57e2", 00:08:25.264 "strip_size_kb": 64, 00:08:25.264 "state": "online", 00:08:25.264 "raid_level": "concat", 00:08:25.264 "superblock": true, 
00:08:25.264 "num_base_bdevs": 2, 00:08:25.264 "num_base_bdevs_discovered": 2, 00:08:25.264 "num_base_bdevs_operational": 2, 00:08:25.264 "base_bdevs_list": [ 00:08:25.264 { 00:08:25.264 "name": "BaseBdev1", 00:08:25.264 "uuid": "b9688085-f608-558c-92bc-d877bf6cad11", 00:08:25.264 "is_configured": true, 00:08:25.264 "data_offset": 2048, 00:08:25.264 "data_size": 63488 00:08:25.264 }, 00:08:25.264 { 00:08:25.264 "name": "BaseBdev2", 00:08:25.264 "uuid": "cdb88534-7eda-511d-9358-237ea5ef5d51", 00:08:25.264 "is_configured": true, 00:08:25.264 "data_offset": 2048, 00:08:25.264 "data_size": 63488 00:08:25.264 } 00:08:25.264 ] 00:08:25.264 }' 00:08:25.264 18:57:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.264 18:57:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.830 18:57:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:25.830 18:57:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:25.830 [2024-11-26 18:57:52.286612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.764 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.764 "name": "raid_bdev1", 00:08:26.764 "uuid": "2a79fdf1-b770-41f2-816c-46d5669d57e2", 00:08:26.764 "strip_size_kb": 64, 00:08:26.764 "state": "online", 00:08:26.764 "raid_level": "concat", 
00:08:26.764 "superblock": true, 00:08:26.764 "num_base_bdevs": 2, 00:08:26.764 "num_base_bdevs_discovered": 2, 00:08:26.764 "num_base_bdevs_operational": 2, 00:08:26.764 "base_bdevs_list": [ 00:08:26.764 { 00:08:26.764 "name": "BaseBdev1", 00:08:26.764 "uuid": "b9688085-f608-558c-92bc-d877bf6cad11", 00:08:26.764 "is_configured": true, 00:08:26.764 "data_offset": 2048, 00:08:26.764 "data_size": 63488 00:08:26.764 }, 00:08:26.764 { 00:08:26.764 "name": "BaseBdev2", 00:08:26.764 "uuid": "cdb88534-7eda-511d-9358-237ea5ef5d51", 00:08:26.764 "is_configured": true, 00:08:26.764 "data_offset": 2048, 00:08:26.764 "data_size": 63488 00:08:26.765 } 00:08:26.765 ] 00:08:26.765 }' 00:08:26.765 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.765 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.331 [2024-11-26 18:57:53.681062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.331 [2024-11-26 18:57:53.681108] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.331 [2024-11-26 18:57:53.684662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.331 [2024-11-26 18:57:53.684890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.331 [2024-11-26 18:57:53.684957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.331 [2024-11-26 18:57:53.684980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:27.331 { 
00:08:27.331 "results": [ 00:08:27.331 { 00:08:27.331 "job": "raid_bdev1", 00:08:27.331 "core_mask": "0x1", 00:08:27.331 "workload": "randrw", 00:08:27.331 "percentage": 50, 00:08:27.331 "status": "finished", 00:08:27.331 "queue_depth": 1, 00:08:27.331 "io_size": 131072, 00:08:27.331 "runtime": 1.392053, 00:08:27.331 "iops": 9341.598344315913, 00:08:27.331 "mibps": 1167.6997930394891, 00:08:27.331 "io_failed": 1, 00:08:27.331 "io_timeout": 0, 00:08:27.331 "avg_latency_us": 149.68042892593758, 00:08:27.331 "min_latency_us": 45.14909090909091, 00:08:27.331 "max_latency_us": 1876.7127272727273 00:08:27.331 } 00:08:27.331 ], 00:08:27.331 "core_count": 1 00:08:27.331 } 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62855 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62855 ']' 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62855 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62855 00:08:27.331 killing process with pid 62855 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62855' 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62855 00:08:27.331 18:57:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@978 -- # wait 62855 00:08:27.331 [2024-11-26 18:57:53.722168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.331 [2024-11-26 18:57:53.857211] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.708 18:57:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:28.708 18:57:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xpwp9OMeyC 00:08:28.708 18:57:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:28.708 18:57:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:28.708 18:57:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:28.708 18:57:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.708 18:57:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:28.708 ************************************ 00:08:28.708 END TEST raid_write_error_test 00:08:28.708 ************************************ 00:08:28.708 18:57:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:28.708 00:08:28.708 real 0m4.773s 00:08:28.708 user 0m5.884s 00:08:28.708 sys 0m0.647s 00:08:28.708 18:57:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.708 18:57:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.708 18:57:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:28.708 18:57:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:28.708 18:57:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:28.708 18:57:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.708 18:57:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:08:28.708 ************************************ 00:08:28.708 START TEST raid_state_function_test 00:08:28.708 ************************************ 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local 
strip_size 00:08:28.708 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:28.709 Process raid pid: 62997 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62997 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62997' 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62997 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62997 ']' 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.709 18:57:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.709 [2024-11-26 18:57:55.266921] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:08:28.709 [2024-11-26 18:57:55.268256] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.966 [2024-11-26 18:57:55.467153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.225 [2024-11-26 18:57:55.628037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.483 [2024-11-26 18:57:55.865836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.483 [2024-11-26 18:57:55.866130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.742 [2024-11-26 18:57:56.255978] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.742 [2024-11-26 18:57:56.256055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.742 [2024-11-26 18:57:56.256077] bdev.c:8626:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:08:29.742 [2024-11-26 18:57:56.256098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.742 "name": "Existed_Raid", 00:08:29.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.742 "strip_size_kb": 0, 00:08:29.742 "state": "configuring", 00:08:29.742 "raid_level": "raid1", 00:08:29.742 "superblock": false, 00:08:29.742 "num_base_bdevs": 2, 00:08:29.742 "num_base_bdevs_discovered": 0, 00:08:29.742 "num_base_bdevs_operational": 2, 00:08:29.742 "base_bdevs_list": [ 00:08:29.742 { 00:08:29.742 "name": "BaseBdev1", 00:08:29.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.742 "is_configured": false, 00:08:29.742 "data_offset": 0, 00:08:29.742 "data_size": 0 00:08:29.742 }, 00:08:29.742 { 00:08:29.742 "name": "BaseBdev2", 00:08:29.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.742 "is_configured": false, 00:08:29.742 "data_offset": 0, 00:08:29.742 "data_size": 0 00:08:29.742 } 00:08:29.742 ] 00:08:29.742 }' 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.742 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.376 [2024-11-26 18:57:56.752138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.376 [2024-11-26 18:57:56.752366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.376 [2024-11-26 18:57:56.764075] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.376 [2024-11-26 18:57:56.764300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.376 [2024-11-26 18:57:56.764451] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.376 [2024-11-26 18:57:56.764527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.376 [2024-11-26 18:57:56.815858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.376 BaseBdev1 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:30.376 
18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.376 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.376 [ 00:08:30.376 { 00:08:30.376 "name": "BaseBdev1", 00:08:30.376 "aliases": [ 00:08:30.376 "bc8f2c96-7115-4f39-adf3-e898b779a76c" 00:08:30.376 ], 00:08:30.376 "product_name": "Malloc disk", 00:08:30.376 "block_size": 512, 00:08:30.376 "num_blocks": 65536, 00:08:30.376 "uuid": "bc8f2c96-7115-4f39-adf3-e898b779a76c", 00:08:30.376 "assigned_rate_limits": { 00:08:30.376 "rw_ios_per_sec": 0, 00:08:30.376 "rw_mbytes_per_sec": 0, 00:08:30.376 "r_mbytes_per_sec": 0, 00:08:30.377 "w_mbytes_per_sec": 0 00:08:30.377 }, 00:08:30.377 "claimed": true, 00:08:30.377 "claim_type": "exclusive_write", 00:08:30.377 "zoned": false, 00:08:30.377 "supported_io_types": { 00:08:30.377 "read": true, 00:08:30.377 "write": true, 00:08:30.377 "unmap": true, 00:08:30.377 "flush": true, 00:08:30.377 "reset": true, 00:08:30.377 "nvme_admin": false, 00:08:30.377 "nvme_io": false, 00:08:30.377 "nvme_io_md": false, 00:08:30.377 "write_zeroes": true, 00:08:30.377 "zcopy": true, 00:08:30.377 "get_zone_info": 
false, 00:08:30.377 "zone_management": false, 00:08:30.377 "zone_append": false, 00:08:30.377 "compare": false, 00:08:30.377 "compare_and_write": false, 00:08:30.377 "abort": true, 00:08:30.377 "seek_hole": false, 00:08:30.377 "seek_data": false, 00:08:30.377 "copy": true, 00:08:30.377 "nvme_iov_md": false 00:08:30.377 }, 00:08:30.377 "memory_domains": [ 00:08:30.377 { 00:08:30.377 "dma_device_id": "system", 00:08:30.377 "dma_device_type": 1 00:08:30.377 }, 00:08:30.377 { 00:08:30.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.377 "dma_device_type": 2 00:08:30.377 } 00:08:30.377 ], 00:08:30.377 "driver_specific": {} 00:08:30.377 } 00:08:30.377 ] 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.377 "name": "Existed_Raid", 00:08:30.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.377 "strip_size_kb": 0, 00:08:30.377 "state": "configuring", 00:08:30.377 "raid_level": "raid1", 00:08:30.377 "superblock": false, 00:08:30.377 "num_base_bdevs": 2, 00:08:30.377 "num_base_bdevs_discovered": 1, 00:08:30.377 "num_base_bdevs_operational": 2, 00:08:30.377 "base_bdevs_list": [ 00:08:30.377 { 00:08:30.377 "name": "BaseBdev1", 00:08:30.377 "uuid": "bc8f2c96-7115-4f39-adf3-e898b779a76c", 00:08:30.377 "is_configured": true, 00:08:30.377 "data_offset": 0, 00:08:30.377 "data_size": 65536 00:08:30.377 }, 00:08:30.377 { 00:08:30.377 "name": "BaseBdev2", 00:08:30.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.377 "is_configured": false, 00:08:30.377 "data_offset": 0, 00:08:30.377 "data_size": 0 00:08:30.377 } 00:08:30.377 ] 00:08:30.377 }' 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.377 18:57:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.943 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.943 18:57:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.943 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.944 [2024-11-26 18:57:57.336035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.944 [2024-11-26 18:57:57.336113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.944 [2024-11-26 18:57:57.344079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.944 [2024-11-26 18:57:57.346867] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.944 [2024-11-26 18:57:57.346928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.944 "name": "Existed_Raid", 00:08:30.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.944 "strip_size_kb": 0, 00:08:30.944 "state": "configuring", 00:08:30.944 "raid_level": "raid1", 00:08:30.944 "superblock": false, 00:08:30.944 "num_base_bdevs": 2, 00:08:30.944 "num_base_bdevs_discovered": 1, 00:08:30.944 "num_base_bdevs_operational": 2, 00:08:30.944 "base_bdevs_list": [ 00:08:30.944 { 00:08:30.944 "name": "BaseBdev1", 00:08:30.944 "uuid": "bc8f2c96-7115-4f39-adf3-e898b779a76c", 00:08:30.944 
"is_configured": true, 00:08:30.944 "data_offset": 0, 00:08:30.944 "data_size": 65536 00:08:30.944 }, 00:08:30.944 { 00:08:30.944 "name": "BaseBdev2", 00:08:30.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.944 "is_configured": false, 00:08:30.944 "data_offset": 0, 00:08:30.944 "data_size": 0 00:08:30.944 } 00:08:30.944 ] 00:08:30.944 }' 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.944 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.510 [2024-11-26 18:57:57.920780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.510 [2024-11-26 18:57:57.920875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:31.510 [2024-11-26 18:57:57.920891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:31.510 [2024-11-26 18:57:57.921266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:31.510 [2024-11-26 18:57:57.921582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:31.510 [2024-11-26 18:57:57.921610] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:31.510 [2024-11-26 18:57:57.922005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.510 BaseBdev2 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.510 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.511 [ 00:08:31.511 { 00:08:31.511 "name": "BaseBdev2", 00:08:31.511 "aliases": [ 00:08:31.511 "71c1975e-986b-402f-a0e6-f877c4d34a5f" 00:08:31.511 ], 00:08:31.511 "product_name": "Malloc disk", 00:08:31.511 "block_size": 512, 00:08:31.511 "num_blocks": 65536, 00:08:31.511 "uuid": "71c1975e-986b-402f-a0e6-f877c4d34a5f", 00:08:31.511 "assigned_rate_limits": { 00:08:31.511 "rw_ios_per_sec": 0, 00:08:31.511 "rw_mbytes_per_sec": 0, 00:08:31.511 "r_mbytes_per_sec": 0, 00:08:31.511 "w_mbytes_per_sec": 0 00:08:31.511 }, 00:08:31.511 "claimed": true, 00:08:31.511 "claim_type": 
"exclusive_write", 00:08:31.511 "zoned": false, 00:08:31.511 "supported_io_types": { 00:08:31.511 "read": true, 00:08:31.511 "write": true, 00:08:31.511 "unmap": true, 00:08:31.511 "flush": true, 00:08:31.511 "reset": true, 00:08:31.511 "nvme_admin": false, 00:08:31.511 "nvme_io": false, 00:08:31.511 "nvme_io_md": false, 00:08:31.511 "write_zeroes": true, 00:08:31.511 "zcopy": true, 00:08:31.511 "get_zone_info": false, 00:08:31.511 "zone_management": false, 00:08:31.511 "zone_append": false, 00:08:31.511 "compare": false, 00:08:31.511 "compare_and_write": false, 00:08:31.511 "abort": true, 00:08:31.511 "seek_hole": false, 00:08:31.511 "seek_data": false, 00:08:31.511 "copy": true, 00:08:31.511 "nvme_iov_md": false 00:08:31.511 }, 00:08:31.511 "memory_domains": [ 00:08:31.511 { 00:08:31.511 "dma_device_id": "system", 00:08:31.511 "dma_device_type": 1 00:08:31.511 }, 00:08:31.511 { 00:08:31.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.511 "dma_device_type": 2 00:08:31.511 } 00:08:31.511 ], 00:08:31.511 "driver_specific": {} 00:08:31.511 } 00:08:31.511 ] 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.511 
18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.511 18:57:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.511 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.511 "name": "Existed_Raid", 00:08:31.511 "uuid": "100b52dd-55c1-4c0f-bc11-57281984b383", 00:08:31.511 "strip_size_kb": 0, 00:08:31.511 "state": "online", 00:08:31.511 "raid_level": "raid1", 00:08:31.511 "superblock": false, 00:08:31.511 "num_base_bdevs": 2, 00:08:31.511 "num_base_bdevs_discovered": 2, 00:08:31.511 "num_base_bdevs_operational": 2, 00:08:31.511 "base_bdevs_list": [ 00:08:31.511 { 00:08:31.511 "name": "BaseBdev1", 00:08:31.511 "uuid": "bc8f2c96-7115-4f39-adf3-e898b779a76c", 00:08:31.511 "is_configured": true, 00:08:31.511 "data_offset": 0, 00:08:31.511 "data_size": 65536 00:08:31.511 }, 00:08:31.511 { 00:08:31.511 "name": "BaseBdev2", 
00:08:31.511 "uuid": "71c1975e-986b-402f-a0e6-f877c4d34a5f", 00:08:31.511 "is_configured": true, 00:08:31.511 "data_offset": 0, 00:08:31.511 "data_size": 65536 00:08:31.511 } 00:08:31.511 ] 00:08:31.511 }' 00:08:31.511 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.511 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.077 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:32.077 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:32.077 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.078 [2024-11-26 18:57:58.525434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.078 "name": "Existed_Raid", 00:08:32.078 "aliases": [ 00:08:32.078 "100b52dd-55c1-4c0f-bc11-57281984b383" 00:08:32.078 ], 
00:08:32.078 "product_name": "Raid Volume", 00:08:32.078 "block_size": 512, 00:08:32.078 "num_blocks": 65536, 00:08:32.078 "uuid": "100b52dd-55c1-4c0f-bc11-57281984b383", 00:08:32.078 "assigned_rate_limits": { 00:08:32.078 "rw_ios_per_sec": 0, 00:08:32.078 "rw_mbytes_per_sec": 0, 00:08:32.078 "r_mbytes_per_sec": 0, 00:08:32.078 "w_mbytes_per_sec": 0 00:08:32.078 }, 00:08:32.078 "claimed": false, 00:08:32.078 "zoned": false, 00:08:32.078 "supported_io_types": { 00:08:32.078 "read": true, 00:08:32.078 "write": true, 00:08:32.078 "unmap": false, 00:08:32.078 "flush": false, 00:08:32.078 "reset": true, 00:08:32.078 "nvme_admin": false, 00:08:32.078 "nvme_io": false, 00:08:32.078 "nvme_io_md": false, 00:08:32.078 "write_zeroes": true, 00:08:32.078 "zcopy": false, 00:08:32.078 "get_zone_info": false, 00:08:32.078 "zone_management": false, 00:08:32.078 "zone_append": false, 00:08:32.078 "compare": false, 00:08:32.078 "compare_and_write": false, 00:08:32.078 "abort": false, 00:08:32.078 "seek_hole": false, 00:08:32.078 "seek_data": false, 00:08:32.078 "copy": false, 00:08:32.078 "nvme_iov_md": false 00:08:32.078 }, 00:08:32.078 "memory_domains": [ 00:08:32.078 { 00:08:32.078 "dma_device_id": "system", 00:08:32.078 "dma_device_type": 1 00:08:32.078 }, 00:08:32.078 { 00:08:32.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.078 "dma_device_type": 2 00:08:32.078 }, 00:08:32.078 { 00:08:32.078 "dma_device_id": "system", 00:08:32.078 "dma_device_type": 1 00:08:32.078 }, 00:08:32.078 { 00:08:32.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.078 "dma_device_type": 2 00:08:32.078 } 00:08:32.078 ], 00:08:32.078 "driver_specific": { 00:08:32.078 "raid": { 00:08:32.078 "uuid": "100b52dd-55c1-4c0f-bc11-57281984b383", 00:08:32.078 "strip_size_kb": 0, 00:08:32.078 "state": "online", 00:08:32.078 "raid_level": "raid1", 00:08:32.078 "superblock": false, 00:08:32.078 "num_base_bdevs": 2, 00:08:32.078 "num_base_bdevs_discovered": 2, 00:08:32.078 "num_base_bdevs_operational": 
2, 00:08:32.078 "base_bdevs_list": [ 00:08:32.078 { 00:08:32.078 "name": "BaseBdev1", 00:08:32.078 "uuid": "bc8f2c96-7115-4f39-adf3-e898b779a76c", 00:08:32.078 "is_configured": true, 00:08:32.078 "data_offset": 0, 00:08:32.078 "data_size": 65536 00:08:32.078 }, 00:08:32.078 { 00:08:32.078 "name": "BaseBdev2", 00:08:32.078 "uuid": "71c1975e-986b-402f-a0e6-f877c4d34a5f", 00:08:32.078 "is_configured": true, 00:08:32.078 "data_offset": 0, 00:08:32.078 "data_size": 65536 00:08:32.078 } 00:08:32.078 ] 00:08:32.078 } 00:08:32.078 } 00:08:32.078 }' 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:32.078 BaseBdev2' 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.078 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.336 18:57:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.336 [2024-11-26 18:57:58.813169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
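The xtrace lines above run two `jq` filters over the `bdev_get_bdevs` JSON: one (bdev_raid.sh@188) selects the names of configured base bdevs, the other (bdev_raid.sh@189/@192) joins `.block_size, .md_size, .md_interleave, .dif_type` into a comparison string. Below is a minimal Python sketch of the same two selections, run against a trimmed copy of the JSON dumped in the log (most fields elided); it is illustrative only, not part of the test suite:

```python
import json

# Trimmed copy of the bdev_get_bdevs JSON dumped in the log above
# (most fields elided; values copied from the log).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "block_size": 512,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of the jq filter at bdev_raid.sh@188:
#   .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(base_bdev_names)  # ['BaseBdev1', 'BaseBdev2']

# Equivalent of the jq filter at bdev_raid.sh@189:
#   [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
# jq's join() renders null (absent) fields as empty strings, which is why the
# log's cmp_raid_bdev/cmp_base_bdev strings are '512' plus trailing spaces,
# matched by the escaped shell pattern \5\1\2\ \ \ .
fields = [raid_bdev_info.get(k) for k in
          ("block_size", "md_size", "md_interleave", "dif_type")]
cmp_raid_bdev = " ".join("" if f is None else str(f) for f in fields)
print(repr(cmp_raid_bdev))  # '512   '
```

The trailing-space detail matters: the raid bdev and each base bdev must produce byte-identical strings, so a bdev that gained metadata fields would fail the `[[ ... == ... ]]` comparison.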
00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.336 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.594 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.594 "name": "Existed_Raid", 00:08:32.594 "uuid": 
"100b52dd-55c1-4c0f-bc11-57281984b383", 00:08:32.594 "strip_size_kb": 0, 00:08:32.594 "state": "online", 00:08:32.594 "raid_level": "raid1", 00:08:32.594 "superblock": false, 00:08:32.594 "num_base_bdevs": 2, 00:08:32.594 "num_base_bdevs_discovered": 1, 00:08:32.594 "num_base_bdevs_operational": 1, 00:08:32.594 "base_bdevs_list": [ 00:08:32.594 { 00:08:32.594 "name": null, 00:08:32.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.594 "is_configured": false, 00:08:32.594 "data_offset": 0, 00:08:32.594 "data_size": 65536 00:08:32.594 }, 00:08:32.594 { 00:08:32.594 "name": "BaseBdev2", 00:08:32.594 "uuid": "71c1975e-986b-402f-a0e6-f877c4d34a5f", 00:08:32.594 "is_configured": true, 00:08:32.594 "data_offset": 0, 00:08:32.594 "data_size": 65536 00:08:32.594 } 00:08:32.594 ] 00:08:32.594 }' 00:08:32.594 18:57:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.594 18:57:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.852 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:32.852 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:32.852 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.852 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:32.852 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.852 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.852 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.111 [2024-11-26 18:57:59.488397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:33.111 [2024-11-26 18:57:59.488551] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.111 [2024-11-26 18:57:59.603411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.111 [2024-11-26 18:57:59.603735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.111 [2024-11-26 18:57:59.603962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:33.111 
18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62997 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62997 ']' 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62997 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62997 00:08:33.111 killing process with pid 62997 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62997' 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62997 00:08:33.111 [2024-11-26 18:57:59.681576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.111 18:57:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62997 00:08:33.111 [2024-11-26 18:57:59.697966] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:34.486 ************************************ 00:08:34.486 END TEST raid_state_function_test 00:08:34.486 ************************************ 00:08:34.486 18:58:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:34.486 00:08:34.486 real 0m5.762s 00:08:34.486 user 
0m8.524s 00:08:34.486 sys 0m0.837s 00:08:34.486 18:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.486 18:58:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.486 18:58:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:34.486 18:58:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:34.486 18:58:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.486 18:58:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.486 ************************************ 00:08:34.486 START TEST raid_state_function_test_sb 00:08:34.486 ************************************ 00:08:34.486 18:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:34.486 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:34.486 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:34.486 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:34.486 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:34.486 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:34.486 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:34.487 Process raid pid: 63257 00:08:34.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
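The counted loop above (bdev_raid.sh@209-211) builds the base bdev list for the superblock test by echoing `BaseBdev$i` for each index and collecting the output into the `base_bdevs` array, which is later passed to `bdev_raid_create` as a single `-b` argument. A minimal Python sketch of that expansion (illustrative only):

```python
# Mirror of the shell loop at bdev_raid.sh@209-211: emit "BaseBdev$i" for
# i in 1..num_base_bdevs, then join the names into the -b argument string.
num_base_bdevs = 2
base_bdevs = [f"BaseBdev{i}" for i in range(1, num_base_bdevs + 1)]

# Passed to bdev_raid_create as: -b 'BaseBdev1 BaseBdev2'
bdev_arg = " ".join(base_bdevs)
print(bdev_arg)  # BaseBdev1 BaseBdev2
```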
00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63257 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63257' 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63257 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63257 ']' 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.487 18:58:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.487 [2024-11-26 18:58:01.081144] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
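The `waitforlisten 63257` call above blocks until the freshly started `bdev_svc` app is accepting RPCs on its UNIX domain socket (`/var/tmp/spdk.sock`), retrying up to `max_retries=100` times. The following is a hedged conceptual sketch of that polling idea, not the actual `autotest_common.sh` implementation:

```python
import os
import socket
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.1):
    """Poll a UNIX domain socket path until a connect() succeeds.

    Conceptual sketch of the log's waitforlisten behaviour (its
    'local max_retries=100'); the real helper lives in autotest_common.sh.
    """
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True          # the app is up and listening
            except OSError:
                pass                 # socket exists but not accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False                     # gave up: process never listened
```

Only once this wait succeeds does the test proceed to issue `rpc_cmd` calls such as `bdev_raid_create` against the socket.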
00:08:34.487 [2024-11-26 18:58:01.081627] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.744 [2024-11-26 18:58:01.276414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.003 [2024-11-26 18:58:01.455532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.260 [2024-11-26 18:58:01.718499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.260 [2024-11-26 18:58:01.718784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.518 [2024-11-26 18:58:02.128885] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.518 [2024-11-26 18:58:02.128970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.518 [2024-11-26 18:58:02.128989] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.518 [2024-11-26 18:58:02.129005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.518 
18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.518 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.777 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.777 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.777 "name": "Existed_Raid", 00:08:35.777 "uuid": "9d03b738-3bf7-4600-9329-03a13023066b", 00:08:35.777 "strip_size_kb": 0, 
00:08:35.777 "state": "configuring", 00:08:35.777 "raid_level": "raid1", 00:08:35.777 "superblock": true, 00:08:35.777 "num_base_bdevs": 2, 00:08:35.777 "num_base_bdevs_discovered": 0, 00:08:35.777 "num_base_bdevs_operational": 2, 00:08:35.777 "base_bdevs_list": [ 00:08:35.777 { 00:08:35.777 "name": "BaseBdev1", 00:08:35.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.777 "is_configured": false, 00:08:35.777 "data_offset": 0, 00:08:35.777 "data_size": 0 00:08:35.777 }, 00:08:35.777 { 00:08:35.777 "name": "BaseBdev2", 00:08:35.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.777 "is_configured": false, 00:08:35.777 "data_offset": 0, 00:08:35.777 "data_size": 0 00:08:35.777 } 00:08:35.777 ] 00:08:35.777 }' 00:08:35.777 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.777 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 [2024-11-26 18:58:02.664964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.342 [2024-11-26 18:58:02.665016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.342 18:58:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 [2024-11-26 18:58:02.672983] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.342 [2024-11-26 18:58:02.673064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.342 [2024-11-26 18:58:02.673089] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.342 [2024-11-26 18:58:02.673122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 [2024-11-26 18:58:02.725397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.342 BaseBdev1 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.342 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.342 [ 00:08:36.342 { 00:08:36.342 "name": "BaseBdev1", 00:08:36.342 "aliases": [ 00:08:36.342 "113e6311-3fc5-4285-8fca-03e7eaf63a59" 00:08:36.342 ], 00:08:36.342 "product_name": "Malloc disk", 00:08:36.342 "block_size": 512, 00:08:36.342 "num_blocks": 65536, 00:08:36.342 "uuid": "113e6311-3fc5-4285-8fca-03e7eaf63a59", 00:08:36.342 "assigned_rate_limits": { 00:08:36.342 "rw_ios_per_sec": 0, 00:08:36.342 "rw_mbytes_per_sec": 0, 00:08:36.342 "r_mbytes_per_sec": 0, 00:08:36.342 "w_mbytes_per_sec": 0 00:08:36.342 }, 00:08:36.342 "claimed": true, 00:08:36.342 "claim_type": "exclusive_write", 00:08:36.342 "zoned": false, 00:08:36.342 "supported_io_types": { 00:08:36.342 "read": true, 00:08:36.342 "write": true, 00:08:36.342 "unmap": true, 00:08:36.342 "flush": true, 00:08:36.342 "reset": true, 00:08:36.342 "nvme_admin": false, 00:08:36.342 "nvme_io": false, 00:08:36.342 "nvme_io_md": false, 00:08:36.342 "write_zeroes": true, 00:08:36.342 "zcopy": true, 00:08:36.342 "get_zone_info": false, 00:08:36.342 "zone_management": false, 00:08:36.343 "zone_append": false, 00:08:36.343 "compare": false, 00:08:36.343 "compare_and_write": false, 00:08:36.343 
"abort": true, 00:08:36.343 "seek_hole": false, 00:08:36.343 "seek_data": false, 00:08:36.343 "copy": true, 00:08:36.343 "nvme_iov_md": false 00:08:36.343 }, 00:08:36.343 "memory_domains": [ 00:08:36.343 { 00:08:36.343 "dma_device_id": "system", 00:08:36.343 "dma_device_type": 1 00:08:36.343 }, 00:08:36.343 { 00:08:36.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.343 "dma_device_type": 2 00:08:36.343 } 00:08:36.343 ], 00:08:36.343 "driver_specific": {} 00:08:36.343 } 00:08:36.343 ] 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.343 "name": "Existed_Raid", 00:08:36.343 "uuid": "90d98306-e8ce-44d0-b97e-de62525d1561", 00:08:36.343 "strip_size_kb": 0, 00:08:36.343 "state": "configuring", 00:08:36.343 "raid_level": "raid1", 00:08:36.343 "superblock": true, 00:08:36.343 "num_base_bdevs": 2, 00:08:36.343 "num_base_bdevs_discovered": 1, 00:08:36.343 "num_base_bdevs_operational": 2, 00:08:36.343 "base_bdevs_list": [ 00:08:36.343 { 00:08:36.343 "name": "BaseBdev1", 00:08:36.343 "uuid": "113e6311-3fc5-4285-8fca-03e7eaf63a59", 00:08:36.343 "is_configured": true, 00:08:36.343 "data_offset": 2048, 00:08:36.343 "data_size": 63488 00:08:36.343 }, 00:08:36.343 { 00:08:36.343 "name": "BaseBdev2", 00:08:36.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.343 "is_configured": false, 00:08:36.343 "data_offset": 0, 00:08:36.343 "data_size": 0 00:08:36.343 } 00:08:36.343 ] 00:08:36.343 }' 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.343 18:58:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.909 [2024-11-26 18:58:03.277637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.909 [2024-11-26 18:58:03.277860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.909 [2024-11-26 18:58:03.289689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.909 [2024-11-26 18:58:03.292363] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.909 [2024-11-26 18:58:03.292566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.909 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.910 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.910 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.910 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.910 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.910 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.910 "name": "Existed_Raid", 00:08:36.910 "uuid": "0607f7ad-b49b-4e57-b55b-b10b2cb13178", 00:08:36.910 "strip_size_kb": 0, 00:08:36.910 "state": "configuring", 00:08:36.910 "raid_level": "raid1", 00:08:36.910 "superblock": true, 00:08:36.910 "num_base_bdevs": 2, 00:08:36.910 "num_base_bdevs_discovered": 1, 00:08:36.910 "num_base_bdevs_operational": 2, 00:08:36.910 "base_bdevs_list": [ 00:08:36.910 { 00:08:36.910 "name": "BaseBdev1", 00:08:36.910 "uuid": "113e6311-3fc5-4285-8fca-03e7eaf63a59", 00:08:36.910 "is_configured": true, 00:08:36.910 "data_offset": 2048, 
00:08:36.910 "data_size": 63488 00:08:36.910 }, 00:08:36.910 { 00:08:36.910 "name": "BaseBdev2", 00:08:36.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.910 "is_configured": false, 00:08:36.910 "data_offset": 0, 00:08:36.910 "data_size": 0 00:08:36.910 } 00:08:36.910 ] 00:08:36.910 }' 00:08:36.910 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.910 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.167 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:37.167 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.167 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.425 [2024-11-26 18:58:03.817602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.425 [2024-11-26 18:58:03.818173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:37.425 [2024-11-26 18:58:03.818346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:37.425 [2024-11-26 18:58:03.818735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:37.425 BaseBdev2 00:08:37.425 [2024-11-26 18:58:03.819114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:37.425 [2024-11-26 18:58:03.819243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, ra 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.425 id_bdev 0x617000007e80 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:37.425 [2024-11-26 18:58:03.819680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.425 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.425 [ 00:08:37.425 { 00:08:37.425 "name": "BaseBdev2", 00:08:37.425 "aliases": [ 00:08:37.425 "ab173c48-d827-4b2f-a27f-91106153c649" 00:08:37.425 ], 00:08:37.425 "product_name": "Malloc disk", 00:08:37.425 "block_size": 512, 00:08:37.425 "num_blocks": 65536, 00:08:37.425 "uuid": "ab173c48-d827-4b2f-a27f-91106153c649", 00:08:37.425 "assigned_rate_limits": { 00:08:37.425 "rw_ios_per_sec": 0, 00:08:37.425 "rw_mbytes_per_sec": 0, 00:08:37.425 "r_mbytes_per_sec": 0, 00:08:37.425 "w_mbytes_per_sec": 0 00:08:37.425 }, 00:08:37.425 "claimed": true, 00:08:37.425 "claim_type": 
"exclusive_write", 00:08:37.425 "zoned": false, 00:08:37.425 "supported_io_types": { 00:08:37.425 "read": true, 00:08:37.425 "write": true, 00:08:37.425 "unmap": true, 00:08:37.425 "flush": true, 00:08:37.425 "reset": true, 00:08:37.425 "nvme_admin": false, 00:08:37.425 "nvme_io": false, 00:08:37.425 "nvme_io_md": false, 00:08:37.425 "write_zeroes": true, 00:08:37.425 "zcopy": true, 00:08:37.425 "get_zone_info": false, 00:08:37.425 "zone_management": false, 00:08:37.425 "zone_append": false, 00:08:37.426 "compare": false, 00:08:37.426 "compare_and_write": false, 00:08:37.426 "abort": true, 00:08:37.426 "seek_hole": false, 00:08:37.426 "seek_data": false, 00:08:37.426 "copy": true, 00:08:37.426 "nvme_iov_md": false 00:08:37.426 }, 00:08:37.426 "memory_domains": [ 00:08:37.426 { 00:08:37.426 "dma_device_id": "system", 00:08:37.426 "dma_device_type": 1 00:08:37.426 }, 00:08:37.426 { 00:08:37.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.426 "dma_device_type": 2 00:08:37.426 } 00:08:37.426 ], 00:08:37.426 "driver_specific": {} 00:08:37.426 } 00:08:37.426 ] 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.426 "name": "Existed_Raid", 00:08:37.426 "uuid": "0607f7ad-b49b-4e57-b55b-b10b2cb13178", 00:08:37.426 "strip_size_kb": 0, 00:08:37.426 "state": "online", 00:08:37.426 "raid_level": "raid1", 00:08:37.426 "superblock": true, 00:08:37.426 "num_base_bdevs": 2, 00:08:37.426 "num_base_bdevs_discovered": 2, 00:08:37.426 "num_base_bdevs_operational": 2, 00:08:37.426 "base_bdevs_list": [ 00:08:37.426 { 00:08:37.426 "name": "BaseBdev1", 00:08:37.426 "uuid": "113e6311-3fc5-4285-8fca-03e7eaf63a59", 00:08:37.426 "is_configured": true, 00:08:37.426 "data_offset": 2048, 00:08:37.426 "data_size": 63488 
00:08:37.426 }, 00:08:37.426 { 00:08:37.426 "name": "BaseBdev2", 00:08:37.426 "uuid": "ab173c48-d827-4b2f-a27f-91106153c649", 00:08:37.426 "is_configured": true, 00:08:37.426 "data_offset": 2048, 00:08:37.426 "data_size": 63488 00:08:37.426 } 00:08:37.426 ] 00:08:37.426 }' 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.426 18:58:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.994 [2024-11-26 18:58:04.418198] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.994 "name": 
"Existed_Raid", 00:08:37.994 "aliases": [ 00:08:37.994 "0607f7ad-b49b-4e57-b55b-b10b2cb13178" 00:08:37.994 ], 00:08:37.994 "product_name": "Raid Volume", 00:08:37.994 "block_size": 512, 00:08:37.994 "num_blocks": 63488, 00:08:37.994 "uuid": "0607f7ad-b49b-4e57-b55b-b10b2cb13178", 00:08:37.994 "assigned_rate_limits": { 00:08:37.994 "rw_ios_per_sec": 0, 00:08:37.994 "rw_mbytes_per_sec": 0, 00:08:37.994 "r_mbytes_per_sec": 0, 00:08:37.994 "w_mbytes_per_sec": 0 00:08:37.994 }, 00:08:37.994 "claimed": false, 00:08:37.994 "zoned": false, 00:08:37.994 "supported_io_types": { 00:08:37.994 "read": true, 00:08:37.994 "write": true, 00:08:37.994 "unmap": false, 00:08:37.994 "flush": false, 00:08:37.994 "reset": true, 00:08:37.994 "nvme_admin": false, 00:08:37.994 "nvme_io": false, 00:08:37.994 "nvme_io_md": false, 00:08:37.994 "write_zeroes": true, 00:08:37.994 "zcopy": false, 00:08:37.994 "get_zone_info": false, 00:08:37.994 "zone_management": false, 00:08:37.994 "zone_append": false, 00:08:37.994 "compare": false, 00:08:37.994 "compare_and_write": false, 00:08:37.994 "abort": false, 00:08:37.994 "seek_hole": false, 00:08:37.994 "seek_data": false, 00:08:37.994 "copy": false, 00:08:37.994 "nvme_iov_md": false 00:08:37.994 }, 00:08:37.994 "memory_domains": [ 00:08:37.994 { 00:08:37.994 "dma_device_id": "system", 00:08:37.994 "dma_device_type": 1 00:08:37.994 }, 00:08:37.994 { 00:08:37.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.994 "dma_device_type": 2 00:08:37.994 }, 00:08:37.994 { 00:08:37.994 "dma_device_id": "system", 00:08:37.994 "dma_device_type": 1 00:08:37.994 }, 00:08:37.994 { 00:08:37.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.994 "dma_device_type": 2 00:08:37.994 } 00:08:37.994 ], 00:08:37.994 "driver_specific": { 00:08:37.994 "raid": { 00:08:37.994 "uuid": "0607f7ad-b49b-4e57-b55b-b10b2cb13178", 00:08:37.994 "strip_size_kb": 0, 00:08:37.994 "state": "online", 00:08:37.994 "raid_level": "raid1", 00:08:37.994 "superblock": true, 00:08:37.994 
"num_base_bdevs": 2, 00:08:37.994 "num_base_bdevs_discovered": 2, 00:08:37.994 "num_base_bdevs_operational": 2, 00:08:37.994 "base_bdevs_list": [ 00:08:37.994 { 00:08:37.994 "name": "BaseBdev1", 00:08:37.994 "uuid": "113e6311-3fc5-4285-8fca-03e7eaf63a59", 00:08:37.994 "is_configured": true, 00:08:37.994 "data_offset": 2048, 00:08:37.994 "data_size": 63488 00:08:37.994 }, 00:08:37.994 { 00:08:37.994 "name": "BaseBdev2", 00:08:37.994 "uuid": "ab173c48-d827-4b2f-a27f-91106153c649", 00:08:37.994 "is_configured": true, 00:08:37.994 "data_offset": 2048, 00:08:37.994 "data_size": 63488 00:08:37.994 } 00:08:37.994 ] 00:08:37.994 } 00:08:37.994 } 00:08:37.994 }' 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:37.994 BaseBdev2' 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.994 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.253 [2024-11-26 18:58:04.689933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:38.253 18:58:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.253 18:58:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.253 "name": "Existed_Raid", 00:08:38.253 "uuid": "0607f7ad-b49b-4e57-b55b-b10b2cb13178", 00:08:38.253 "strip_size_kb": 0, 00:08:38.253 "state": "online", 00:08:38.253 "raid_level": "raid1", 00:08:38.253 "superblock": true, 00:08:38.253 "num_base_bdevs": 2, 00:08:38.253 "num_base_bdevs_discovered": 1, 00:08:38.253 "num_base_bdevs_operational": 1, 00:08:38.253 "base_bdevs_list": [ 00:08:38.253 { 00:08:38.253 "name": null, 00:08:38.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.253 "is_configured": false, 00:08:38.253 "data_offset": 0, 00:08:38.253 "data_size": 63488 00:08:38.253 }, 00:08:38.253 { 00:08:38.253 "name": "BaseBdev2", 00:08:38.253 "uuid": "ab173c48-d827-4b2f-a27f-91106153c649", 00:08:38.253 "is_configured": true, 00:08:38.253 "data_offset": 2048, 00:08:38.253 "data_size": 63488 00:08:38.253 } 00:08:38.253 ] 00:08:38.253 }' 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.253 18:58:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.878 18:58:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.878 [2024-11-26 18:58:05.356854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:38.878 [2024-11-26 18:58:05.356997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.878 [2024-11-26 18:58:05.451970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.878 [2024-11-26 18:58:05.452262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.878 [2024-11-26 18:58:05.452440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.878 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63257 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63257 ']' 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63257 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63257 00:08:39.137 killing process with pid 63257 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63257' 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63257 00:08:39.137 [2024-11-26 18:58:05.546601] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.137 18:58:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 63257 00:08:39.137 [2024-11-26 18:58:05.562254] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.511 ************************************ 00:08:40.511 END TEST raid_state_function_test_sb 00:08:40.511 ************************************ 00:08:40.511 18:58:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:40.511 00:08:40.511 real 0m5.762s 00:08:40.511 user 0m8.573s 00:08:40.511 sys 0m0.876s 00:08:40.511 18:58:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.511 18:58:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.511 18:58:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:40.511 18:58:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:40.511 18:58:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.511 18:58:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.511 ************************************ 00:08:40.511 START TEST raid_superblock_test 00:08:40.511 ************************************ 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:40.511 18:58:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63515 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63515 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63515 ']' 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.511 18:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.511 [2024-11-26 18:58:06.902709] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:08:40.511 [2024-11-26 18:58:06.902935] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63515 ] 00:08:40.511 [2024-11-26 18:58:07.088442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.769 [2024-11-26 18:58:07.236579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.027 [2024-11-26 18:58:07.461569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.027 [2024-11-26 18:58:07.461632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.594 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.594 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:41.594 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.595 18:58:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.595 malloc1 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.595 [2024-11-26 18:58:07.970072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.595 [2024-11-26 18:58:07.970151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.595 [2024-11-26 18:58:07.970187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:41.595 [2024-11-26 18:58:07.970202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.595 [2024-11-26 18:58:07.973076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.595 [2024-11-26 18:58:07.973274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.595 pt1 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.595 18:58:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.595 18:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.595 malloc2 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.595 [2024-11-26 18:58:08.025786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.595 [2024-11-26 18:58:08.025863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.595 [2024-11-26 18:58:08.025904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:41.595 
[2024-11-26 18:58:08.025919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.595 [2024-11-26 18:58:08.028793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.595 [2024-11-26 18:58:08.028838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.595 pt2 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.595 [2024-11-26 18:58:08.033839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:41.595 [2024-11-26 18:58:08.036363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.595 [2024-11-26 18:58:08.036580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:41.595 [2024-11-26 18:58:08.036604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:41.595 [2024-11-26 18:58:08.036908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:41.595 [2024-11-26 18:58:08.037115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:41.595 [2024-11-26 18:58:08.037164] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:41.595 [2024-11-26 18:58:08.037373] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.595 "name": "raid_bdev1", 00:08:41.595 "uuid": 
"8914e09e-d946-4b8c-a301-75bba23d288a", 00:08:41.595 "strip_size_kb": 0, 00:08:41.595 "state": "online", 00:08:41.595 "raid_level": "raid1", 00:08:41.595 "superblock": true, 00:08:41.595 "num_base_bdevs": 2, 00:08:41.595 "num_base_bdevs_discovered": 2, 00:08:41.595 "num_base_bdevs_operational": 2, 00:08:41.595 "base_bdevs_list": [ 00:08:41.595 { 00:08:41.595 "name": "pt1", 00:08:41.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.595 "is_configured": true, 00:08:41.595 "data_offset": 2048, 00:08:41.595 "data_size": 63488 00:08:41.595 }, 00:08:41.595 { 00:08:41.595 "name": "pt2", 00:08:41.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.595 "is_configured": true, 00:08:41.595 "data_offset": 2048, 00:08:41.595 "data_size": 63488 00:08:41.595 } 00:08:41.595 ] 00:08:41.595 }' 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.595 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.162 18:58:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.162 [2024-11-26 18:58:08.566355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.162 "name": "raid_bdev1", 00:08:42.162 "aliases": [ 00:08:42.162 "8914e09e-d946-4b8c-a301-75bba23d288a" 00:08:42.162 ], 00:08:42.162 "product_name": "Raid Volume", 00:08:42.162 "block_size": 512, 00:08:42.162 "num_blocks": 63488, 00:08:42.162 "uuid": "8914e09e-d946-4b8c-a301-75bba23d288a", 00:08:42.162 "assigned_rate_limits": { 00:08:42.162 "rw_ios_per_sec": 0, 00:08:42.162 "rw_mbytes_per_sec": 0, 00:08:42.162 "r_mbytes_per_sec": 0, 00:08:42.162 "w_mbytes_per_sec": 0 00:08:42.162 }, 00:08:42.162 "claimed": false, 00:08:42.162 "zoned": false, 00:08:42.162 "supported_io_types": { 00:08:42.162 "read": true, 00:08:42.162 "write": true, 00:08:42.162 "unmap": false, 00:08:42.162 "flush": false, 00:08:42.162 "reset": true, 00:08:42.162 "nvme_admin": false, 00:08:42.162 "nvme_io": false, 00:08:42.162 "nvme_io_md": false, 00:08:42.162 "write_zeroes": true, 00:08:42.162 "zcopy": false, 00:08:42.162 "get_zone_info": false, 00:08:42.162 "zone_management": false, 00:08:42.162 "zone_append": false, 00:08:42.162 "compare": false, 00:08:42.162 "compare_and_write": false, 00:08:42.162 "abort": false, 00:08:42.162 "seek_hole": false, 00:08:42.162 "seek_data": false, 00:08:42.162 "copy": false, 00:08:42.162 "nvme_iov_md": false 00:08:42.162 }, 00:08:42.162 "memory_domains": [ 00:08:42.162 { 00:08:42.162 "dma_device_id": "system", 00:08:42.162 "dma_device_type": 1 00:08:42.162 }, 00:08:42.162 { 00:08:42.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.162 "dma_device_type": 2 00:08:42.162 }, 00:08:42.162 { 00:08:42.162 "dma_device_id": "system", 00:08:42.162 "dma_device_type": 
1 00:08:42.162 }, 00:08:42.162 { 00:08:42.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.162 "dma_device_type": 2 00:08:42.162 } 00:08:42.162 ], 00:08:42.162 "driver_specific": { 00:08:42.162 "raid": { 00:08:42.162 "uuid": "8914e09e-d946-4b8c-a301-75bba23d288a", 00:08:42.162 "strip_size_kb": 0, 00:08:42.162 "state": "online", 00:08:42.162 "raid_level": "raid1", 00:08:42.162 "superblock": true, 00:08:42.162 "num_base_bdevs": 2, 00:08:42.162 "num_base_bdevs_discovered": 2, 00:08:42.162 "num_base_bdevs_operational": 2, 00:08:42.162 "base_bdevs_list": [ 00:08:42.162 { 00:08:42.162 "name": "pt1", 00:08:42.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.162 "is_configured": true, 00:08:42.162 "data_offset": 2048, 00:08:42.162 "data_size": 63488 00:08:42.162 }, 00:08:42.162 { 00:08:42.162 "name": "pt2", 00:08:42.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.162 "is_configured": true, 00:08:42.162 "data_offset": 2048, 00:08:42.162 "data_size": 63488 00:08:42.162 } 00:08:42.162 ] 00:08:42.162 } 00:08:42.162 } 00:08:42.162 }' 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:42.162 pt2' 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.162 18:58:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.162 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:42.421 [2024-11-26 18:58:08.834413] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8914e09e-d946-4b8c-a301-75bba23d288a 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8914e09e-d946-4b8c-a301-75bba23d288a ']' 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.421 [2024-11-26 18:58:08.894002] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.421 [2024-11-26 18:58:08.894165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.421 [2024-11-26 18:58:08.894364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.421 [2024-11-26 18:58:08.894452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.421 [2024-11-26 18:58:08.894473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.421 18:58:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.421 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:42.421 18:58:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.422 [2024-11-26 18:58:09.026107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:42.422 [2024-11-26 18:58:09.028864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:42.422 [2024-11-26 18:58:09.028968] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:42.422 [2024-11-26 18:58:09.029050] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:42.422 [2024-11-26 18:58:09.029078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.422 [2024-11-26 18:58:09.029094] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:42.422 request: 00:08:42.422 { 00:08:42.422 "name": "raid_bdev1", 00:08:42.422 "raid_level": "raid1", 00:08:42.422 "base_bdevs": [ 00:08:42.422 "malloc1", 00:08:42.422 "malloc2" 00:08:42.422 ], 00:08:42.422 "superblock": false, 00:08:42.422 "method": "bdev_raid_create", 00:08:42.422 "req_id": 1 00:08:42.422 } 00:08:42.422 Got JSON-RPC error response 00:08:42.422 response: 00:08:42.422 { 00:08:42.422 "code": -17, 00:08:42.422 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:42.422 } 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.422 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.681 
18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.681 [2024-11-26 18:58:09.098101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.681 [2024-11-26 18:58:09.098350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.681 [2024-11-26 18:58:09.098534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:42.681 [2024-11-26 18:58:09.098697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.681 [2024-11-26 18:58:09.101943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.681 [2024-11-26 18:58:09.102111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.681 [2024-11-26 18:58:09.102358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:42.681 [2024-11-26 18:58:09.102537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.681 pt1 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.681 18:58:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.681 "name": "raid_bdev1", 00:08:42.681 "uuid": "8914e09e-d946-4b8c-a301-75bba23d288a", 00:08:42.681 "strip_size_kb": 0, 00:08:42.681 "state": "configuring", 00:08:42.681 "raid_level": "raid1", 00:08:42.681 "superblock": true, 00:08:42.681 "num_base_bdevs": 2, 00:08:42.681 "num_base_bdevs_discovered": 1, 00:08:42.681 "num_base_bdevs_operational": 2, 00:08:42.681 "base_bdevs_list": [ 00:08:42.681 { 00:08:42.681 "name": "pt1", 00:08:42.681 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.681 "is_configured": true, 00:08:42.681 "data_offset": 2048, 00:08:42.681 "data_size": 63488 00:08:42.681 }, 00:08:42.681 { 00:08:42.681 "name": null, 00:08:42.681 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.681 "is_configured": false, 00:08:42.681 "data_offset": 2048, 00:08:42.681 "data_size": 63488 00:08:42.681 } 00:08:42.681 ] 00:08:42.681 }' 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:42.681 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.247 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:43.247 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:43.247 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.247 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.247 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.247 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.247 [2024-11-26 18:58:09.618640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.247 [2024-11-26 18:58:09.618746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.247 [2024-11-26 18:58:09.618782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:43.247 [2024-11-26 18:58:09.618800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.247 [2024-11-26 18:58:09.619454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.247 [2024-11-26 18:58:09.619493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.247 [2024-11-26 18:58:09.619609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.247 [2024-11-26 18:58:09.619654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.247 [2024-11-26 18:58:09.619812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:43.247 [2024-11-26 18:58:09.619834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:08:43.247 [2024-11-26 18:58:09.620163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:43.248 [2024-11-26 18:58:09.620387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:43.248 [2024-11-26 18:58:09.620403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:43.248 [2024-11-26 18:58:09.620584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.248 pt2 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.248 18:58:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.248 "name": "raid_bdev1", 00:08:43.248 "uuid": "8914e09e-d946-4b8c-a301-75bba23d288a", 00:08:43.248 "strip_size_kb": 0, 00:08:43.248 "state": "online", 00:08:43.248 "raid_level": "raid1", 00:08:43.248 "superblock": true, 00:08:43.248 "num_base_bdevs": 2, 00:08:43.248 "num_base_bdevs_discovered": 2, 00:08:43.248 "num_base_bdevs_operational": 2, 00:08:43.248 "base_bdevs_list": [ 00:08:43.248 { 00:08:43.248 "name": "pt1", 00:08:43.248 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.248 "is_configured": true, 00:08:43.248 "data_offset": 2048, 00:08:43.248 "data_size": 63488 00:08:43.248 }, 00:08:43.248 { 00:08:43.248 "name": "pt2", 00:08:43.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.248 "is_configured": true, 00:08:43.248 "data_offset": 2048, 00:08:43.248 "data_size": 63488 00:08:43.248 } 00:08:43.248 ] 00:08:43.248 }' 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.248 18:58:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.506 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:43.506 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:43.506 18:58:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.506 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:43.506 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.506 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.506 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.506 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.506 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.506 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.506 [2024-11-26 18:58:10.115082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.765 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.765 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.765 "name": "raid_bdev1", 00:08:43.765 "aliases": [ 00:08:43.765 "8914e09e-d946-4b8c-a301-75bba23d288a" 00:08:43.765 ], 00:08:43.765 "product_name": "Raid Volume", 00:08:43.765 "block_size": 512, 00:08:43.765 "num_blocks": 63488, 00:08:43.765 "uuid": "8914e09e-d946-4b8c-a301-75bba23d288a", 00:08:43.765 "assigned_rate_limits": { 00:08:43.765 "rw_ios_per_sec": 0, 00:08:43.765 "rw_mbytes_per_sec": 0, 00:08:43.765 "r_mbytes_per_sec": 0, 00:08:43.765 "w_mbytes_per_sec": 0 00:08:43.765 }, 00:08:43.765 "claimed": false, 00:08:43.765 "zoned": false, 00:08:43.765 "supported_io_types": { 00:08:43.765 "read": true, 00:08:43.765 "write": true, 00:08:43.765 "unmap": false, 00:08:43.765 "flush": false, 00:08:43.765 "reset": true, 00:08:43.765 "nvme_admin": false, 00:08:43.765 "nvme_io": false, 00:08:43.765 "nvme_io_md": false, 00:08:43.765 "write_zeroes": true, 00:08:43.765 "zcopy": 
false, 00:08:43.765 "get_zone_info": false, 00:08:43.765 "zone_management": false, 00:08:43.765 "zone_append": false, 00:08:43.765 "compare": false, 00:08:43.765 "compare_and_write": false, 00:08:43.765 "abort": false, 00:08:43.765 "seek_hole": false, 00:08:43.765 "seek_data": false, 00:08:43.765 "copy": false, 00:08:43.765 "nvme_iov_md": false 00:08:43.765 }, 00:08:43.765 "memory_domains": [ 00:08:43.765 { 00:08:43.765 "dma_device_id": "system", 00:08:43.765 "dma_device_type": 1 00:08:43.765 }, 00:08:43.765 { 00:08:43.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.765 "dma_device_type": 2 00:08:43.765 }, 00:08:43.765 { 00:08:43.765 "dma_device_id": "system", 00:08:43.765 "dma_device_type": 1 00:08:43.765 }, 00:08:43.765 { 00:08:43.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.765 "dma_device_type": 2 00:08:43.765 } 00:08:43.765 ], 00:08:43.765 "driver_specific": { 00:08:43.765 "raid": { 00:08:43.765 "uuid": "8914e09e-d946-4b8c-a301-75bba23d288a", 00:08:43.765 "strip_size_kb": 0, 00:08:43.765 "state": "online", 00:08:43.765 "raid_level": "raid1", 00:08:43.765 "superblock": true, 00:08:43.765 "num_base_bdevs": 2, 00:08:43.765 "num_base_bdevs_discovered": 2, 00:08:43.765 "num_base_bdevs_operational": 2, 00:08:43.765 "base_bdevs_list": [ 00:08:43.765 { 00:08:43.765 "name": "pt1", 00:08:43.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.765 "is_configured": true, 00:08:43.765 "data_offset": 2048, 00:08:43.765 "data_size": 63488 00:08:43.765 }, 00:08:43.765 { 00:08:43.765 "name": "pt2", 00:08:43.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.765 "is_configured": true, 00:08:43.765 "data_offset": 2048, 00:08:43.765 "data_size": 63488 00:08:43.765 } 00:08:43.765 ] 00:08:43.765 } 00:08:43.765 } 00:08:43.765 }' 00:08:43.765 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:43.766 pt2' 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:43.766 [2024-11-26 18:58:10.363101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.766 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8914e09e-d946-4b8c-a301-75bba23d288a '!=' 8914e09e-d946-4b8c-a301-75bba23d288a ']' 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.024 [2024-11-26 18:58:10.414812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.024 "name": "raid_bdev1", 00:08:44.024 "uuid": "8914e09e-d946-4b8c-a301-75bba23d288a", 00:08:44.024 "strip_size_kb": 0, 00:08:44.024 "state": "online", 00:08:44.024 "raid_level": "raid1", 00:08:44.024 "superblock": true, 00:08:44.024 "num_base_bdevs": 2, 00:08:44.024 "num_base_bdevs_discovered": 1, 00:08:44.024 "num_base_bdevs_operational": 1, 00:08:44.024 "base_bdevs_list": [ 00:08:44.024 { 00:08:44.024 "name": null, 
00:08:44.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.024 "is_configured": false, 00:08:44.024 "data_offset": 0, 00:08:44.024 "data_size": 63488 00:08:44.024 }, 00:08:44.024 { 00:08:44.024 "name": "pt2", 00:08:44.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.024 "is_configured": true, 00:08:44.024 "data_offset": 2048, 00:08:44.024 "data_size": 63488 00:08:44.024 } 00:08:44.024 ] 00:08:44.024 }' 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.024 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.283 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.283 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.283 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.283 [2024-11-26 18:58:10.894926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.283 [2024-11-26 18:58:10.894962] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.283 [2024-11-26 18:58:10.895107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.283 [2024-11-26 18:58:10.895179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.283 [2024-11-26 18:58:10.895211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:44.283 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.283 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.283 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.283 18:58:10 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:44.283 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.608 [2024-11-26 18:58:10.958873] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.608 [2024-11-26 18:58:10.958955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.608 [2024-11-26 18:58:10.958981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:44.608 [2024-11-26 18:58:10.958997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.608 [2024-11-26 18:58:10.962194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.608 [2024-11-26 18:58:10.962394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.608 [2024-11-26 18:58:10.962524] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:44.608 [2024-11-26 18:58:10.962592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.608 [2024-11-26 18:58:10.962728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:44.608 [2024-11-26 18:58:10.962751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:44.608 [2024-11-26 18:58:10.963103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:44.608 [2024-11-26 18:58:10.963302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:44.608 [2024-11-26 18:58:10.963319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:44.608 [2024-11-26 18:58:10.963575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.608 pt2 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.608 18:58:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.608 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.608 "name": "raid_bdev1", 00:08:44.608 "uuid": "8914e09e-d946-4b8c-a301-75bba23d288a", 00:08:44.608 "strip_size_kb": 0, 00:08:44.608 "state": "online", 00:08:44.608 "raid_level": "raid1", 00:08:44.608 "superblock": true, 00:08:44.608 "num_base_bdevs": 2, 00:08:44.608 "num_base_bdevs_discovered": 1, 00:08:44.608 "num_base_bdevs_operational": 1, 00:08:44.608 "base_bdevs_list": [ 00:08:44.608 { 00:08:44.608 "name": null, 
00:08:44.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.609 "is_configured": false, 00:08:44.609 "data_offset": 2048, 00:08:44.609 "data_size": 63488 00:08:44.609 }, 00:08:44.609 { 00:08:44.609 "name": "pt2", 00:08:44.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.609 "is_configured": true, 00:08:44.609 "data_offset": 2048, 00:08:44.609 "data_size": 63488 00:08:44.609 } 00:08:44.609 ] 00:08:44.609 }' 00:08:44.609 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.609 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.955 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.955 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.955 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.955 [2024-11-26 18:58:11.515636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.955 [2024-11-26 18:58:11.515679] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.955 [2024-11-26 18:58:11.515794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.955 [2024-11-26 18:58:11.515877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.955 [2024-11-26 18:58:11.515895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:44.955 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.955 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:44.955 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.955 18:58:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.955 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.955 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.955 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:44.956 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:44.956 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:44.956 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.956 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.956 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.956 [2024-11-26 18:58:11.575694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.956 [2024-11-26 18:58:11.575782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.956 [2024-11-26 18:58:11.575818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:44.956 [2024-11-26 18:58:11.575833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.214 [2024-11-26 18:58:11.579038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.214 [2024-11-26 18:58:11.579086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:45.214 [2024-11-26 18:58:11.579216] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:45.214 [2024-11-26 18:58:11.579279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:45.214 [2024-11-26 18:58:11.579484] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number 
on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:45.214 [2024-11-26 18:58:11.579504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:45.214 [2024-11-26 18:58:11.579528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:45.214 [2024-11-26 18:58:11.579607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.215 [2024-11-26 18:58:11.579900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:45.215 [2024-11-26 18:58:11.579925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:45.215 [2024-11-26 18:58:11.580257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:45.215 [2024-11-26 18:58:11.580479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:45.215 [2024-11-26 18:58:11.580502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:45.215 [2024-11-26 18:58:11.580753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.215 pt1 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=0 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.215 "name": "raid_bdev1", 00:08:45.215 "uuid": "8914e09e-d946-4b8c-a301-75bba23d288a", 00:08:45.215 "strip_size_kb": 0, 00:08:45.215 "state": "online", 00:08:45.215 "raid_level": "raid1", 00:08:45.215 "superblock": true, 00:08:45.215 "num_base_bdevs": 2, 00:08:45.215 "num_base_bdevs_discovered": 1, 00:08:45.215 "num_base_bdevs_operational": 1, 00:08:45.215 "base_bdevs_list": [ 00:08:45.215 { 00:08:45.215 "name": null, 00:08:45.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.215 "is_configured": false, 00:08:45.215 "data_offset": 2048, 00:08:45.215 "data_size": 63488 00:08:45.215 }, 00:08:45.215 { 00:08:45.215 "name": "pt2", 00:08:45.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.215 "is_configured": true, 00:08:45.215 "data_offset": 2048, 
00:08:45.215 "data_size": 63488 00:08:45.215 } 00:08:45.215 ] 00:08:45.215 }' 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.215 18:58:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.474 18:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:45.474 18:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:45.474 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.474 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.474 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.733 [2024-11-26 18:58:12.120548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8914e09e-d946-4b8c-a301-75bba23d288a '!=' 8914e09e-d946-4b8c-a301-75bba23d288a ']' 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63515 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63515 ']' 00:08:45.733 18:58:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63515 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63515 00:08:45.733 killing process with pid 63515 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63515' 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63515 00:08:45.733 18:58:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63515 00:08:45.733 [2024-11-26 18:58:12.195889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.733 [2024-11-26 18:58:12.196059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.733 [2024-11-26 18:58:12.196164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.733 [2024-11-26 18:58:12.196206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:45.991 [2024-11-26 18:58:12.395265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.368 18:58:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:47.368 00:08:47.368 real 0m6.802s 00:08:47.368 user 0m10.555s 00:08:47.368 sys 0m1.029s 00:08:47.368 18:58:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.368 18:58:13 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:47.368 ************************************ 00:08:47.368 END TEST raid_superblock_test 00:08:47.368 ************************************ 00:08:47.368 18:58:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:47.368 18:58:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:47.368 18:58:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.368 18:58:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.368 ************************************ 00:08:47.368 START TEST raid_read_error_test 00:08:47.368 ************************************ 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.368 18:58:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.R3MI1iparQ 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63850 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63850 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63850 ']' 00:08:47.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.368 18:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.368 [2024-11-26 18:58:13.779927] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:08:47.368 [2024-11-26 18:58:13.780105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63850 ] 00:08:47.368 [2024-11-26 18:58:13.973600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.626 [2024-11-26 18:58:14.136332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.883 [2024-11-26 18:58:14.355517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.884 [2024-11-26 18:58:14.355584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.141 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.141 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:48.141 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.141 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:48.141 18:58:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.141 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.398 BaseBdev1_malloc 00:08:48.398 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.398 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:48.398 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.399 true 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.399 [2024-11-26 18:58:14.812073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:48.399 [2024-11-26 18:58:14.812159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.399 [2024-11-26 18:58:14.812190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:48.399 [2024-11-26 18:58:14.812210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.399 [2024-11-26 18:58:14.815202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.399 [2024-11-26 18:58:14.815252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:48.399 BaseBdev1 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.399 
18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.399 BaseBdev2_malloc 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.399 true 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.399 [2024-11-26 18:58:14.880908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:48.399 [2024-11-26 18:58:14.880980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.399 [2024-11-26 18:58:14.881006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:48.399 [2024-11-26 18:58:14.881024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.399 [2024-11-26 18:58:14.884037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:08:48.399 [2024-11-26 18:58:14.884089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:48.399 BaseBdev2 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.399 [2024-11-26 18:58:14.889059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.399 [2024-11-26 18:58:14.891963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.399 [2024-11-26 18:58:14.892417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.399 [2024-11-26 18:58:14.892567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:48.399 [2024-11-26 18:58:14.892925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:48.399 [2024-11-26 18:58:14.893330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.399 [2024-11-26 18:58:14.893461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:48.399 [2024-11-26 18:58:14.893850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.399 "name": "raid_bdev1", 00:08:48.399 "uuid": "177d8eef-4e87-423e-b359-c0762fe0a328", 00:08:48.399 "strip_size_kb": 0, 00:08:48.399 "state": "online", 00:08:48.399 "raid_level": "raid1", 00:08:48.399 "superblock": true, 00:08:48.399 "num_base_bdevs": 2, 00:08:48.399 "num_base_bdevs_discovered": 2, 00:08:48.399 "num_base_bdevs_operational": 2, 00:08:48.399 "base_bdevs_list": [ 00:08:48.399 { 00:08:48.399 "name": "BaseBdev1", 00:08:48.399 "uuid": 
"05ffb429-5032-5c4b-9dcb-ca496a678504", 00:08:48.399 "is_configured": true, 00:08:48.399 "data_offset": 2048, 00:08:48.399 "data_size": 63488 00:08:48.399 }, 00:08:48.399 { 00:08:48.399 "name": "BaseBdev2", 00:08:48.399 "uuid": "0fef8d94-ea33-5759-a084-300332870d98", 00:08:48.399 "is_configured": true, 00:08:48.399 "data_offset": 2048, 00:08:48.399 "data_size": 63488 00:08:48.399 } 00:08:48.399 ] 00:08:48.399 }' 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.399 18:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.965 18:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:48.965 18:58:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:48.965 [2024-11-26 18:58:15.539566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.896 "name": "raid_bdev1", 00:08:49.896 "uuid": "177d8eef-4e87-423e-b359-c0762fe0a328", 00:08:49.896 "strip_size_kb": 0, 00:08:49.896 "state": "online", 00:08:49.896 "raid_level": "raid1", 00:08:49.896 "superblock": true, 00:08:49.896 "num_base_bdevs": 2, 00:08:49.896 "num_base_bdevs_discovered": 2, 00:08:49.896 "num_base_bdevs_operational": 2, 
00:08:49.896 "base_bdevs_list": [ 00:08:49.896 { 00:08:49.896 "name": "BaseBdev1", 00:08:49.896 "uuid": "05ffb429-5032-5c4b-9dcb-ca496a678504", 00:08:49.896 "is_configured": true, 00:08:49.896 "data_offset": 2048, 00:08:49.896 "data_size": 63488 00:08:49.896 }, 00:08:49.896 { 00:08:49.896 "name": "BaseBdev2", 00:08:49.896 "uuid": "0fef8d94-ea33-5759-a084-300332870d98", 00:08:49.896 "is_configured": true, 00:08:49.896 "data_offset": 2048, 00:08:49.896 "data_size": 63488 00:08:49.896 } 00:08:49.896 ] 00:08:49.896 }' 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.896 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.462 [2024-11-26 18:58:16.945417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.462 [2024-11-26 18:58:16.945613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.462 [2024-11-26 18:58:16.949148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.462 [2024-11-26 18:58:16.949353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.462 [2024-11-26 18:58:16.949665] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.462 [2024-11-26 18:58:16.949824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, sta{ 00:08:50.462 "results": [ 00:08:50.462 { 00:08:50.462 "job": "raid_bdev1", 00:08:50.462 "core_mask": "0x1", 00:08:50.462 "workload": "randrw", 00:08:50.462 "percentage": 50, 00:08:50.462 
"status": "finished", 00:08:50.462 "queue_depth": 1, 00:08:50.462 "io_size": 131072, 00:08:50.462 "runtime": 1.403555, 00:08:50.462 "iops": 11371.837940087848, 00:08:50.462 "mibps": 1421.479742510981, 00:08:50.462 "io_failed": 0, 00:08:50.462 "io_timeout": 0, 00:08:50.462 "avg_latency_us": 83.66140102864368, 00:08:50.462 "min_latency_us": 42.123636363636365, 00:08:50.462 "max_latency_us": 1854.370909090909 00:08:50.462 } 00:08:50.462 ], 00:08:50.462 "core_count": 1 00:08:50.462 } 00:08:50.462 te offline 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63850 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63850 ']' 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63850 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63850 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63850' 00:08:50.462 killing process with pid 63850 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63850 00:08:50.462 [2024-11-26 18:58:16.991640] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.462 18:58:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63850 00:08:50.721 
[2024-11-26 18:58:17.123012] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.094 18:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.R3MI1iparQ 00:08:52.094 18:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:52.094 18:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:52.094 18:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:52.094 18:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:52.094 18:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.094 18:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:52.094 18:58:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:52.094 00:08:52.094 real 0m4.700s 00:08:52.094 user 0m5.768s 00:08:52.094 sys 0m0.657s 00:08:52.094 ************************************ 00:08:52.094 END TEST raid_read_error_test 00:08:52.094 ************************************ 00:08:52.094 18:58:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.094 18:58:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.094 18:58:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:52.094 18:58:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:52.094 18:58:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.094 18:58:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.094 ************************************ 00:08:52.094 START TEST raid_write_error_test 00:08:52.094 ************************************ 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:52.094 
18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:52.094 18:58:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hHWldWmash 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63996 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63996 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63996 ']' 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.094 18:58:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.094 [2024-11-26 18:58:18.498494] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:08:52.094 [2024-11-26 18:58:18.499412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63996 ] 00:08:52.094 [2024-11-26 18:58:18.675206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.353 [2024-11-26 18:58:18.828140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.611 [2024-11-26 18:58:19.061389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.611 [2024-11-26 18:58:19.061660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.177 BaseBdev1_malloc 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.177 true 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.177 [2024-11-26 18:58:19.571818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:53.177 [2024-11-26 18:58:19.571897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.177 [2024-11-26 18:58:19.571943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:53.177 [2024-11-26 18:58:19.571970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.177 [2024-11-26 18:58:19.575182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.177 [2024-11-26 18:58:19.575242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:53.177 BaseBdev1 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.177 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.177 BaseBdev2_malloc 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:53.178 18:58:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.178 true 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.178 [2024-11-26 18:58:19.632400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:53.178 [2024-11-26 18:58:19.632510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.178 [2024-11-26 18:58:19.632548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:53.178 [2024-11-26 18:58:19.632575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.178 [2024-11-26 18:58:19.635681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.178 [2024-11-26 18:58:19.635739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:53.178 BaseBdev2 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.178 [2024-11-26 18:58:19.640512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:53.178 [2024-11-26 18:58:19.643113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.178 [2024-11-26 18:58:19.643408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:53.178 [2024-11-26 18:58:19.643435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:53.178 [2024-11-26 18:58:19.643750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:53.178 [2024-11-26 18:58:19.643993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:53.178 [2024-11-26 18:58:19.644010] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:53.178 [2024-11-26 18:58:19.644206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.178 "name": "raid_bdev1", 00:08:53.178 "uuid": "693d9705-ef3d-4845-922a-99fbdde3756a", 00:08:53.178 "strip_size_kb": 0, 00:08:53.178 "state": "online", 00:08:53.178 "raid_level": "raid1", 00:08:53.178 "superblock": true, 00:08:53.178 "num_base_bdevs": 2, 00:08:53.178 "num_base_bdevs_discovered": 2, 00:08:53.178 "num_base_bdevs_operational": 2, 00:08:53.178 "base_bdevs_list": [ 00:08:53.178 { 00:08:53.178 "name": "BaseBdev1", 00:08:53.178 "uuid": "9a682739-ee47-5d70-915b-89578bf2dbde", 00:08:53.178 "is_configured": true, 00:08:53.178 "data_offset": 2048, 00:08:53.178 "data_size": 63488 00:08:53.178 }, 00:08:53.178 { 00:08:53.178 "name": "BaseBdev2", 00:08:53.178 "uuid": "9f2fde5f-9b9a-5617-b762-f7651223a9eb", 00:08:53.178 "is_configured": true, 00:08:53.178 "data_offset": 2048, 00:08:53.178 "data_size": 63488 00:08:53.178 } 00:08:53.178 ] 00:08:53.178 }' 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.178 18:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.744 18:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:53.744 18:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:53.744 [2024-11-26 18:58:20.290208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.680 [2024-11-26 18:58:21.157824] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:54.680 [2024-11-26 18:58:21.158155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.680 [2024-11-26 18:58:21.158475] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.680 18:58:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.680 "name": "raid_bdev1", 00:08:54.680 "uuid": "693d9705-ef3d-4845-922a-99fbdde3756a", 00:08:54.680 "strip_size_kb": 0, 00:08:54.680 "state": "online", 00:08:54.680 "raid_level": "raid1", 00:08:54.680 "superblock": true, 00:08:54.680 "num_base_bdevs": 2, 00:08:54.680 "num_base_bdevs_discovered": 1, 00:08:54.680 "num_base_bdevs_operational": 1, 00:08:54.680 "base_bdevs_list": [ 00:08:54.680 { 00:08:54.680 "name": null, 00:08:54.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.680 "is_configured": false, 00:08:54.680 "data_offset": 0, 00:08:54.680 "data_size": 63488 00:08:54.680 }, 
00:08:54.680 { 00:08:54.680 "name": "BaseBdev2", 00:08:54.680 "uuid": "9f2fde5f-9b9a-5617-b762-f7651223a9eb", 00:08:54.680 "is_configured": true, 00:08:54.680 "data_offset": 2048, 00:08:54.680 "data_size": 63488 00:08:54.680 } 00:08:54.680 ] 00:08:54.680 }' 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.680 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.249 [2024-11-26 18:58:21.746771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.249 [2024-11-26 18:58:21.746838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.249 [2024-11-26 18:58:21.750258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.249 [2024-11-26 18:58:21.750480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.249 [2024-11-26 18:58:21.750587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.249 [2024-11-26 18:58:21.750609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:55.249 { 00:08:55.249 "results": [ 00:08:55.249 { 00:08:55.249 "job": "raid_bdev1", 00:08:55.249 "core_mask": "0x1", 00:08:55.249 "workload": "randrw", 00:08:55.249 "percentage": 50, 00:08:55.249 "status": "finished", 00:08:55.249 "queue_depth": 1, 00:08:55.249 "io_size": 131072, 00:08:55.249 "runtime": 1.453751, 00:08:55.249 "iops": 12610.137499475495, 00:08:55.249 "mibps": 1576.267187434437, 00:08:55.249 "io_failed": 0, 
00:08:55.249 "io_timeout": 0, 00:08:55.249 "avg_latency_us": 74.70361375042151, 00:08:55.249 "min_latency_us": 41.89090909090909, 00:08:55.249 "max_latency_us": 1854.370909090909 00:08:55.249 } 00:08:55.249 ], 00:08:55.249 "core_count": 1 00:08:55.249 } 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63996 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63996 ']' 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63996 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63996 00:08:55.249 killing process with pid 63996 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63996' 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63996 00:08:55.249 [2024-11-26 18:58:21.787812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.249 18:58:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63996 00:08:55.508 [2024-11-26 18:58:21.921761] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.883 18:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hHWldWmash 00:08:56.883 18:58:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:56.883 18:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:56.883 ************************************ 00:08:56.883 END TEST raid_write_error_test 00:08:56.883 ************************************ 00:08:56.883 18:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:56.883 18:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:56.883 18:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.883 18:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:56.883 18:58:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:56.883 00:08:56.883 real 0m4.763s 00:08:56.883 user 0m5.913s 00:08:56.883 sys 0m0.650s 00:08:56.883 18:58:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.883 18:58:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.883 18:58:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:56.883 18:58:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:56.883 18:58:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:56.883 18:58:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:56.883 18:58:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.883 18:58:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.883 ************************************ 00:08:56.883 START TEST raid_state_function_test 00:08:56.883 ************************************ 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:56.883 18:58:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:56.883 Process raid pid: 64141 00:08:56.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64141 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64141' 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64141 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64141 ']' 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.883 18:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.883 [2024-11-26 18:58:23.328361] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:08:56.883 [2024-11-26 18:58:23.328714] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.141 [2024-11-26 18:58:23.513222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.141 [2024-11-26 18:58:23.687185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.399 [2024-11-26 18:58:23.920938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.399 [2024-11-26 18:58:23.921231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.966 [2024-11-26 18:58:24.370459] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.966 [2024-11-26 18:58:24.370664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.966 
[2024-11-26 18:58:24.370833] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.966 [2024-11-26 18:58:24.370979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.966 [2024-11-26 18:58:24.371123] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.966 [2024-11-26 18:58:24.371183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.966 "name": "Existed_Raid", 00:08:57.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.966 "strip_size_kb": 64, 00:08:57.966 "state": "configuring", 00:08:57.966 "raid_level": "raid0", 00:08:57.966 "superblock": false, 00:08:57.966 "num_base_bdevs": 3, 00:08:57.966 "num_base_bdevs_discovered": 0, 00:08:57.966 "num_base_bdevs_operational": 3, 00:08:57.966 "base_bdevs_list": [ 00:08:57.966 { 00:08:57.966 "name": "BaseBdev1", 00:08:57.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.966 "is_configured": false, 00:08:57.966 "data_offset": 0, 00:08:57.966 "data_size": 0 00:08:57.966 }, 00:08:57.966 { 00:08:57.966 "name": "BaseBdev2", 00:08:57.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.966 "is_configured": false, 00:08:57.966 "data_offset": 0, 00:08:57.966 "data_size": 0 00:08:57.966 }, 00:08:57.966 { 00:08:57.966 "name": "BaseBdev3", 00:08:57.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.966 "is_configured": false, 00:08:57.966 "data_offset": 0, 00:08:57.966 "data_size": 0 00:08:57.966 } 00:08:57.966 ] 00:08:57.966 }' 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.966 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.534 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 [2024-11-26 18:58:24.874555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.535 [2024-11-26 18:58:24.874602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 [2024-11-26 18:58:24.882537] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.535 [2024-11-26 18:58:24.882595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.535 [2024-11-26 18:58:24.882611] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.535 [2024-11-26 18:58:24.882627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.535 [2024-11-26 18:58:24.882637] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.535 [2024-11-26 18:58:24.882652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 [2024-11-26 18:58:24.932067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.535 BaseBdev1 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 [ 00:08:58.535 { 00:08:58.535 "name": "BaseBdev1", 00:08:58.535 "aliases": [ 00:08:58.535 "d207b108-666d-43c8-89cf-deddab3c9546" 00:08:58.535 ], 
00:08:58.535 "product_name": "Malloc disk", 00:08:58.535 "block_size": 512, 00:08:58.535 "num_blocks": 65536, 00:08:58.535 "uuid": "d207b108-666d-43c8-89cf-deddab3c9546", 00:08:58.535 "assigned_rate_limits": { 00:08:58.535 "rw_ios_per_sec": 0, 00:08:58.535 "rw_mbytes_per_sec": 0, 00:08:58.535 "r_mbytes_per_sec": 0, 00:08:58.535 "w_mbytes_per_sec": 0 00:08:58.535 }, 00:08:58.535 "claimed": true, 00:08:58.535 "claim_type": "exclusive_write", 00:08:58.535 "zoned": false, 00:08:58.535 "supported_io_types": { 00:08:58.535 "read": true, 00:08:58.535 "write": true, 00:08:58.535 "unmap": true, 00:08:58.535 "flush": true, 00:08:58.535 "reset": true, 00:08:58.535 "nvme_admin": false, 00:08:58.535 "nvme_io": false, 00:08:58.535 "nvme_io_md": false, 00:08:58.535 "write_zeroes": true, 00:08:58.535 "zcopy": true, 00:08:58.535 "get_zone_info": false, 00:08:58.535 "zone_management": false, 00:08:58.535 "zone_append": false, 00:08:58.535 "compare": false, 00:08:58.535 "compare_and_write": false, 00:08:58.535 "abort": true, 00:08:58.535 "seek_hole": false, 00:08:58.535 "seek_data": false, 00:08:58.535 "copy": true, 00:08:58.535 "nvme_iov_md": false 00:08:58.535 }, 00:08:58.535 "memory_domains": [ 00:08:58.535 { 00:08:58.535 "dma_device_id": "system", 00:08:58.535 "dma_device_type": 1 00:08:58.535 }, 00:08:58.535 { 00:08:58.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.535 "dma_device_type": 2 00:08:58.535 } 00:08:58.535 ], 00:08:58.535 "driver_specific": {} 00:08:58.535 } 00:08:58.535 ] 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.535 
18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 18:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.535 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.535 "name": "Existed_Raid", 00:08:58.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.535 "strip_size_kb": 64, 00:08:58.535 "state": "configuring", 00:08:58.535 "raid_level": "raid0", 00:08:58.535 "superblock": false, 00:08:58.535 "num_base_bdevs": 3, 00:08:58.535 "num_base_bdevs_discovered": 1, 00:08:58.535 "num_base_bdevs_operational": 3, 00:08:58.535 "base_bdevs_list": [ 00:08:58.535 { 00:08:58.535 "name": 
"BaseBdev1", 00:08:58.535 "uuid": "d207b108-666d-43c8-89cf-deddab3c9546", 00:08:58.535 "is_configured": true, 00:08:58.535 "data_offset": 0, 00:08:58.535 "data_size": 65536 00:08:58.535 }, 00:08:58.535 { 00:08:58.535 "name": "BaseBdev2", 00:08:58.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.535 "is_configured": false, 00:08:58.535 "data_offset": 0, 00:08:58.535 "data_size": 0 00:08:58.535 }, 00:08:58.535 { 00:08:58.535 "name": "BaseBdev3", 00:08:58.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.535 "is_configured": false, 00:08:58.535 "data_offset": 0, 00:08:58.535 "data_size": 0 00:08:58.535 } 00:08:58.535 ] 00:08:58.535 }' 00:08:58.535 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.535 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.103 [2024-11-26 18:58:25.488288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.103 [2024-11-26 18:58:25.488391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.103 
[2024-11-26 18:58:25.496343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.103 [2024-11-26 18:58:25.498929] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.103 [2024-11-26 18:58:25.499020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.103 [2024-11-26 18:58:25.499038] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.103 [2024-11-26 18:58:25.499054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.103 "name": "Existed_Raid", 00:08:59.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.103 "strip_size_kb": 64, 00:08:59.103 "state": "configuring", 00:08:59.103 "raid_level": "raid0", 00:08:59.103 "superblock": false, 00:08:59.103 "num_base_bdevs": 3, 00:08:59.103 "num_base_bdevs_discovered": 1, 00:08:59.103 "num_base_bdevs_operational": 3, 00:08:59.103 "base_bdevs_list": [ 00:08:59.103 { 00:08:59.103 "name": "BaseBdev1", 00:08:59.103 "uuid": "d207b108-666d-43c8-89cf-deddab3c9546", 00:08:59.103 "is_configured": true, 00:08:59.103 "data_offset": 0, 00:08:59.103 "data_size": 65536 00:08:59.103 }, 00:08:59.103 { 00:08:59.103 "name": "BaseBdev2", 00:08:59.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.103 "is_configured": false, 00:08:59.103 "data_offset": 0, 00:08:59.103 "data_size": 0 00:08:59.103 }, 00:08:59.103 { 00:08:59.103 "name": "BaseBdev3", 00:08:59.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.103 "is_configured": false, 00:08:59.103 "data_offset": 0, 00:08:59.103 "data_size": 0 00:08:59.103 } 00:08:59.103 ] 00:08:59.103 }' 00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:59.103 18:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.671 [2024-11-26 18:58:26.075595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.671 BaseBdev2 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.671 18:58:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.671 [ 00:08:59.671 { 00:08:59.671 "name": "BaseBdev2", 00:08:59.671 "aliases": [ 00:08:59.671 "c424b042-6a44-4ebe-ae15-c767cd751ef6" 00:08:59.671 ], 00:08:59.671 "product_name": "Malloc disk", 00:08:59.671 "block_size": 512, 00:08:59.671 "num_blocks": 65536, 00:08:59.671 "uuid": "c424b042-6a44-4ebe-ae15-c767cd751ef6", 00:08:59.671 "assigned_rate_limits": { 00:08:59.671 "rw_ios_per_sec": 0, 00:08:59.671 "rw_mbytes_per_sec": 0, 00:08:59.671 "r_mbytes_per_sec": 0, 00:08:59.671 "w_mbytes_per_sec": 0 00:08:59.671 }, 00:08:59.671 "claimed": true, 00:08:59.671 "claim_type": "exclusive_write", 00:08:59.671 "zoned": false, 00:08:59.671 "supported_io_types": { 00:08:59.671 "read": true, 00:08:59.671 "write": true, 00:08:59.671 "unmap": true, 00:08:59.671 "flush": true, 00:08:59.671 "reset": true, 00:08:59.671 "nvme_admin": false, 00:08:59.671 "nvme_io": false, 00:08:59.671 "nvme_io_md": false, 00:08:59.671 "write_zeroes": true, 00:08:59.671 "zcopy": true, 00:08:59.671 "get_zone_info": false, 00:08:59.671 "zone_management": false, 00:08:59.671 "zone_append": false, 00:08:59.671 "compare": false, 00:08:59.671 "compare_and_write": false, 00:08:59.671 "abort": true, 00:08:59.671 "seek_hole": false, 00:08:59.671 "seek_data": false, 00:08:59.671 "copy": true, 00:08:59.671 "nvme_iov_md": false 00:08:59.671 }, 00:08:59.671 "memory_domains": [ 00:08:59.671 { 00:08:59.671 "dma_device_id": "system", 00:08:59.671 "dma_device_type": 1 00:08:59.671 }, 00:08:59.671 { 00:08:59.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.671 "dma_device_type": 2 00:08:59.671 } 00:08:59.671 ], 00:08:59.671 "driver_specific": {} 00:08:59.671 } 00:08:59.671 ] 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.671 18:58:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.671 "name": "Existed_Raid", 00:08:59.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.671 "strip_size_kb": 64, 00:08:59.671 "state": "configuring", 00:08:59.671 "raid_level": "raid0", 00:08:59.671 "superblock": false, 00:08:59.671 "num_base_bdevs": 3, 00:08:59.671 "num_base_bdevs_discovered": 2, 00:08:59.671 "num_base_bdevs_operational": 3, 00:08:59.671 "base_bdevs_list": [ 00:08:59.671 { 00:08:59.671 "name": "BaseBdev1", 00:08:59.671 "uuid": "d207b108-666d-43c8-89cf-deddab3c9546", 00:08:59.671 "is_configured": true, 00:08:59.671 "data_offset": 0, 00:08:59.671 "data_size": 65536 00:08:59.671 }, 00:08:59.671 { 00:08:59.671 "name": "BaseBdev2", 00:08:59.671 "uuid": "c424b042-6a44-4ebe-ae15-c767cd751ef6", 00:08:59.671 "is_configured": true, 00:08:59.671 "data_offset": 0, 00:08:59.671 "data_size": 65536 00:08:59.671 }, 00:08:59.671 { 00:08:59.671 "name": "BaseBdev3", 00:08:59.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.671 "is_configured": false, 00:08:59.671 "data_offset": 0, 00:08:59.671 "data_size": 0 00:08:59.671 } 00:08:59.671 ] 00:08:59.671 }' 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.671 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.239 [2024-11-26 18:58:26.681048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.239 [2024-11-26 18:58:26.681353] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:00.239 [2024-11-26 18:58:26.681391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:00.239 [2024-11-26 18:58:26.681772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:00.239 [2024-11-26 18:58:26.682029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:00.239 [2024-11-26 18:58:26.682046] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:00.239 [2024-11-26 18:58:26.682549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.239 BaseBdev3 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.239 
18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.239 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.239 [ 00:09:00.239 { 00:09:00.239 "name": "BaseBdev3", 00:09:00.239 "aliases": [ 00:09:00.239 "b837d144-16c9-4c5b-a8c4-6bd4b41f8ecc" 00:09:00.239 ], 00:09:00.239 "product_name": "Malloc disk", 00:09:00.239 "block_size": 512, 00:09:00.239 "num_blocks": 65536, 00:09:00.239 "uuid": "b837d144-16c9-4c5b-a8c4-6bd4b41f8ecc", 00:09:00.239 "assigned_rate_limits": { 00:09:00.239 "rw_ios_per_sec": 0, 00:09:00.239 "rw_mbytes_per_sec": 0, 00:09:00.239 "r_mbytes_per_sec": 0, 00:09:00.239 "w_mbytes_per_sec": 0 00:09:00.239 }, 00:09:00.239 "claimed": true, 00:09:00.239 "claim_type": "exclusive_write", 00:09:00.239 "zoned": false, 00:09:00.239 "supported_io_types": { 00:09:00.239 "read": true, 00:09:00.239 "write": true, 00:09:00.239 "unmap": true, 00:09:00.239 "flush": true, 00:09:00.239 "reset": true, 00:09:00.239 "nvme_admin": false, 00:09:00.239 "nvme_io": false, 00:09:00.239 "nvme_io_md": false, 00:09:00.239 "write_zeroes": true, 00:09:00.239 "zcopy": true, 00:09:00.240 "get_zone_info": false, 00:09:00.240 "zone_management": false, 00:09:00.240 "zone_append": false, 00:09:00.240 "compare": false, 00:09:00.240 "compare_and_write": false, 00:09:00.240 "abort": true, 00:09:00.240 "seek_hole": false, 00:09:00.240 "seek_data": false, 00:09:00.240 "copy": true, 00:09:00.240 "nvme_iov_md": false 00:09:00.240 }, 00:09:00.240 "memory_domains": [ 00:09:00.240 { 00:09:00.240 "dma_device_id": "system", 00:09:00.240 "dma_device_type": 1 00:09:00.240 }, 00:09:00.240 { 00:09:00.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.240 "dma_device_type": 2 00:09:00.240 } 00:09:00.240 ], 00:09:00.240 "driver_specific": {} 00:09:00.240 } 00:09:00.240 ] 
00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.240 "name": "Existed_Raid", 00:09:00.240 "uuid": "e7162144-3526-457c-ae0a-9ed96ec17549", 00:09:00.240 "strip_size_kb": 64, 00:09:00.240 "state": "online", 00:09:00.240 "raid_level": "raid0", 00:09:00.240 "superblock": false, 00:09:00.240 "num_base_bdevs": 3, 00:09:00.240 "num_base_bdevs_discovered": 3, 00:09:00.240 "num_base_bdevs_operational": 3, 00:09:00.240 "base_bdevs_list": [ 00:09:00.240 { 00:09:00.240 "name": "BaseBdev1", 00:09:00.240 "uuid": "d207b108-666d-43c8-89cf-deddab3c9546", 00:09:00.240 "is_configured": true, 00:09:00.240 "data_offset": 0, 00:09:00.240 "data_size": 65536 00:09:00.240 }, 00:09:00.240 { 00:09:00.240 "name": "BaseBdev2", 00:09:00.240 "uuid": "c424b042-6a44-4ebe-ae15-c767cd751ef6", 00:09:00.240 "is_configured": true, 00:09:00.240 "data_offset": 0, 00:09:00.240 "data_size": 65536 00:09:00.240 }, 00:09:00.240 { 00:09:00.240 "name": "BaseBdev3", 00:09:00.240 "uuid": "b837d144-16c9-4c5b-a8c4-6bd4b41f8ecc", 00:09:00.240 "is_configured": true, 00:09:00.240 "data_offset": 0, 00:09:00.240 "data_size": 65536 00:09:00.240 } 00:09:00.240 ] 00:09:00.240 }' 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.240 18:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.807 [2024-11-26 18:58:27.241697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.807 "name": "Existed_Raid", 00:09:00.807 "aliases": [ 00:09:00.807 "e7162144-3526-457c-ae0a-9ed96ec17549" 00:09:00.807 ], 00:09:00.807 "product_name": "Raid Volume", 00:09:00.807 "block_size": 512, 00:09:00.807 "num_blocks": 196608, 00:09:00.807 "uuid": "e7162144-3526-457c-ae0a-9ed96ec17549", 00:09:00.807 "assigned_rate_limits": { 00:09:00.807 "rw_ios_per_sec": 0, 00:09:00.807 "rw_mbytes_per_sec": 0, 00:09:00.807 "r_mbytes_per_sec": 0, 00:09:00.807 "w_mbytes_per_sec": 0 00:09:00.807 }, 00:09:00.807 "claimed": false, 00:09:00.807 "zoned": false, 00:09:00.807 "supported_io_types": { 00:09:00.807 "read": true, 00:09:00.807 "write": true, 00:09:00.807 "unmap": true, 00:09:00.807 "flush": true, 00:09:00.807 "reset": true, 00:09:00.807 "nvme_admin": false, 00:09:00.807 "nvme_io": false, 00:09:00.807 "nvme_io_md": false, 00:09:00.807 "write_zeroes": true, 00:09:00.807 "zcopy": false, 00:09:00.807 "get_zone_info": false, 00:09:00.807 "zone_management": false, 00:09:00.807 
"zone_append": false, 00:09:00.807 "compare": false, 00:09:00.807 "compare_and_write": false, 00:09:00.807 "abort": false, 00:09:00.807 "seek_hole": false, 00:09:00.807 "seek_data": false, 00:09:00.807 "copy": false, 00:09:00.807 "nvme_iov_md": false 00:09:00.807 }, 00:09:00.807 "memory_domains": [ 00:09:00.807 { 00:09:00.807 "dma_device_id": "system", 00:09:00.807 "dma_device_type": 1 00:09:00.807 }, 00:09:00.807 { 00:09:00.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.807 "dma_device_type": 2 00:09:00.807 }, 00:09:00.807 { 00:09:00.807 "dma_device_id": "system", 00:09:00.807 "dma_device_type": 1 00:09:00.807 }, 00:09:00.807 { 00:09:00.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.807 "dma_device_type": 2 00:09:00.807 }, 00:09:00.807 { 00:09:00.807 "dma_device_id": "system", 00:09:00.807 "dma_device_type": 1 00:09:00.807 }, 00:09:00.807 { 00:09:00.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.807 "dma_device_type": 2 00:09:00.807 } 00:09:00.807 ], 00:09:00.807 "driver_specific": { 00:09:00.807 "raid": { 00:09:00.807 "uuid": "e7162144-3526-457c-ae0a-9ed96ec17549", 00:09:00.807 "strip_size_kb": 64, 00:09:00.807 "state": "online", 00:09:00.807 "raid_level": "raid0", 00:09:00.807 "superblock": false, 00:09:00.807 "num_base_bdevs": 3, 00:09:00.807 "num_base_bdevs_discovered": 3, 00:09:00.807 "num_base_bdevs_operational": 3, 00:09:00.807 "base_bdevs_list": [ 00:09:00.807 { 00:09:00.807 "name": "BaseBdev1", 00:09:00.807 "uuid": "d207b108-666d-43c8-89cf-deddab3c9546", 00:09:00.807 "is_configured": true, 00:09:00.807 "data_offset": 0, 00:09:00.807 "data_size": 65536 00:09:00.807 }, 00:09:00.807 { 00:09:00.807 "name": "BaseBdev2", 00:09:00.807 "uuid": "c424b042-6a44-4ebe-ae15-c767cd751ef6", 00:09:00.807 "is_configured": true, 00:09:00.807 "data_offset": 0, 00:09:00.807 "data_size": 65536 00:09:00.807 }, 00:09:00.807 { 00:09:00.807 "name": "BaseBdev3", 00:09:00.807 "uuid": "b837d144-16c9-4c5b-a8c4-6bd4b41f8ecc", 00:09:00.807 "is_configured": true, 
00:09:00.807 "data_offset": 0, 00:09:00.807 "data_size": 65536 00:09:00.807 } 00:09:00.807 ] 00:09:00.807 } 00:09:00.807 } 00:09:00.807 }' 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:00.807 BaseBdev2 00:09:00.807 BaseBdev3' 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.807 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.065 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.065 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.065 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.065 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.065 18:58:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.066 [2024-11-26 18:58:27.549398] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.066 [2024-11-26 18:58:27.549436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.066 [2024-11-26 18:58:27.549516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.066 18:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.324 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.324 "name": "Existed_Raid", 00:09:01.324 "uuid": "e7162144-3526-457c-ae0a-9ed96ec17549", 00:09:01.324 "strip_size_kb": 64, 00:09:01.324 "state": "offline", 00:09:01.324 "raid_level": "raid0", 00:09:01.324 "superblock": false, 00:09:01.324 "num_base_bdevs": 3, 00:09:01.324 "num_base_bdevs_discovered": 2, 00:09:01.324 "num_base_bdevs_operational": 2, 00:09:01.324 "base_bdevs_list": [ 00:09:01.324 { 00:09:01.324 "name": null, 00:09:01.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.324 "is_configured": false, 00:09:01.324 "data_offset": 0, 00:09:01.324 "data_size": 65536 00:09:01.324 }, 00:09:01.324 { 00:09:01.324 "name": "BaseBdev2", 00:09:01.324 "uuid": "c424b042-6a44-4ebe-ae15-c767cd751ef6", 00:09:01.324 "is_configured": true, 00:09:01.324 "data_offset": 0, 00:09:01.324 "data_size": 65536 00:09:01.324 }, 00:09:01.324 { 00:09:01.324 "name": "BaseBdev3", 00:09:01.324 "uuid": "b837d144-16c9-4c5b-a8c4-6bd4b41f8ecc", 00:09:01.324 "is_configured": true, 00:09:01.324 "data_offset": 0, 00:09:01.324 "data_size": 65536 00:09:01.324 } 00:09:01.324 ] 00:09:01.324 }' 00:09:01.324 18:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.325 18:58:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.582 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.582 [2024-11-26 18:58:28.190152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.840 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.840 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.840 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.841 18:58:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.841 [2024-11-26 18:58:28.345118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:01.841 [2024-11-26 18:58:28.345202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:01.841 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.100 BaseBdev2 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.100 [ 00:09:02.100 { 00:09:02.100 "name": "BaseBdev2", 00:09:02.100 "aliases": [ 00:09:02.100 "ec5335b2-d8eb-4872-9caa-308c454f86f1" 00:09:02.100 ], 00:09:02.100 "product_name": "Malloc disk", 00:09:02.100 "block_size": 512, 00:09:02.100 "num_blocks": 65536, 00:09:02.100 "uuid": "ec5335b2-d8eb-4872-9caa-308c454f86f1", 00:09:02.100 "assigned_rate_limits": { 00:09:02.100 "rw_ios_per_sec": 0, 00:09:02.100 "rw_mbytes_per_sec": 0, 00:09:02.100 "r_mbytes_per_sec": 0, 00:09:02.100 "w_mbytes_per_sec": 0 00:09:02.100 }, 00:09:02.100 "claimed": false, 00:09:02.100 "zoned": false, 00:09:02.100 "supported_io_types": { 00:09:02.100 "read": true, 00:09:02.100 "write": true, 00:09:02.100 "unmap": true, 00:09:02.100 "flush": true, 00:09:02.100 "reset": true, 00:09:02.100 "nvme_admin": false, 00:09:02.100 "nvme_io": false, 00:09:02.100 "nvme_io_md": false, 00:09:02.100 "write_zeroes": true, 00:09:02.100 "zcopy": true, 00:09:02.100 "get_zone_info": false, 00:09:02.100 "zone_management": false, 00:09:02.100 "zone_append": false, 00:09:02.100 "compare": false, 00:09:02.100 "compare_and_write": false, 00:09:02.100 "abort": true, 00:09:02.100 "seek_hole": false, 00:09:02.100 "seek_data": false, 00:09:02.100 "copy": true, 00:09:02.100 "nvme_iov_md": false 00:09:02.100 }, 00:09:02.100 "memory_domains": [ 00:09:02.100 { 00:09:02.100 "dma_device_id": "system", 00:09:02.100 "dma_device_type": 1 00:09:02.100 }, 
00:09:02.100 { 00:09:02.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.100 "dma_device_type": 2 00:09:02.100 } 00:09:02.100 ], 00:09:02.100 "driver_specific": {} 00:09:02.100 } 00:09:02.100 ] 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.100 BaseBdev3 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.100 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.101 [ 00:09:02.101 { 00:09:02.101 "name": "BaseBdev3", 00:09:02.101 "aliases": [ 00:09:02.101 "d0004d35-9bd6-4299-8963-38c63db77fab" 00:09:02.101 ], 00:09:02.101 "product_name": "Malloc disk", 00:09:02.101 "block_size": 512, 00:09:02.101 "num_blocks": 65536, 00:09:02.101 "uuid": "d0004d35-9bd6-4299-8963-38c63db77fab", 00:09:02.101 "assigned_rate_limits": { 00:09:02.101 "rw_ios_per_sec": 0, 00:09:02.101 "rw_mbytes_per_sec": 0, 00:09:02.101 "r_mbytes_per_sec": 0, 00:09:02.101 "w_mbytes_per_sec": 0 00:09:02.101 }, 00:09:02.101 "claimed": false, 00:09:02.101 "zoned": false, 00:09:02.101 "supported_io_types": { 00:09:02.101 "read": true, 00:09:02.101 "write": true, 00:09:02.101 "unmap": true, 00:09:02.101 "flush": true, 00:09:02.101 "reset": true, 00:09:02.101 "nvme_admin": false, 00:09:02.101 "nvme_io": false, 00:09:02.101 "nvme_io_md": false, 00:09:02.101 "write_zeroes": true, 00:09:02.101 "zcopy": true, 00:09:02.101 "get_zone_info": false, 00:09:02.101 "zone_management": false, 00:09:02.101 "zone_append": false, 00:09:02.101 "compare": false, 00:09:02.101 "compare_and_write": false, 00:09:02.101 "abort": true, 00:09:02.101 "seek_hole": false, 00:09:02.101 "seek_data": false, 00:09:02.101 "copy": true, 00:09:02.101 "nvme_iov_md": false 00:09:02.101 }, 00:09:02.101 "memory_domains": [ 00:09:02.101 { 00:09:02.101 "dma_device_id": "system", 00:09:02.101 "dma_device_type": 1 00:09:02.101 }, 00:09:02.101 { 
00:09:02.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.101 "dma_device_type": 2 00:09:02.101 } 00:09:02.101 ], 00:09:02.101 "driver_specific": {} 00:09:02.101 } 00:09:02.101 ] 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.101 [2024-11-26 18:58:28.650061] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.101 [2024-11-26 18:58:28.650249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.101 [2024-11-26 18:58:28.650311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.101 [2024-11-26 18:58:28.652773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.101 "name": "Existed_Raid", 00:09:02.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.101 "strip_size_kb": 64, 00:09:02.101 "state": "configuring", 00:09:02.101 "raid_level": "raid0", 00:09:02.101 "superblock": false, 00:09:02.101 "num_base_bdevs": 3, 00:09:02.101 "num_base_bdevs_discovered": 2, 00:09:02.101 "num_base_bdevs_operational": 3, 00:09:02.101 "base_bdevs_list": [ 00:09:02.101 { 00:09:02.101 "name": "BaseBdev1", 00:09:02.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.101 
"is_configured": false, 00:09:02.101 "data_offset": 0, 00:09:02.101 "data_size": 0 00:09:02.101 }, 00:09:02.101 { 00:09:02.101 "name": "BaseBdev2", 00:09:02.101 "uuid": "ec5335b2-d8eb-4872-9caa-308c454f86f1", 00:09:02.101 "is_configured": true, 00:09:02.101 "data_offset": 0, 00:09:02.101 "data_size": 65536 00:09:02.101 }, 00:09:02.101 { 00:09:02.101 "name": "BaseBdev3", 00:09:02.101 "uuid": "d0004d35-9bd6-4299-8963-38c63db77fab", 00:09:02.101 "is_configured": true, 00:09:02.101 "data_offset": 0, 00:09:02.101 "data_size": 65536 00:09:02.101 } 00:09:02.101 ] 00:09:02.101 }' 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.101 18:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.680 [2024-11-26 18:58:29.166256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.680 18:58:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.680 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.680 "name": "Existed_Raid", 00:09:02.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.680 "strip_size_kb": 64, 00:09:02.680 "state": "configuring", 00:09:02.680 "raid_level": "raid0", 00:09:02.680 "superblock": false, 00:09:02.680 "num_base_bdevs": 3, 00:09:02.680 "num_base_bdevs_discovered": 1, 00:09:02.680 "num_base_bdevs_operational": 3, 00:09:02.681 "base_bdevs_list": [ 00:09:02.681 { 00:09:02.681 "name": "BaseBdev1", 00:09:02.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.681 "is_configured": false, 00:09:02.681 "data_offset": 0, 00:09:02.681 "data_size": 0 00:09:02.681 }, 00:09:02.681 { 00:09:02.681 "name": null, 00:09:02.681 "uuid": "ec5335b2-d8eb-4872-9caa-308c454f86f1", 00:09:02.681 "is_configured": false, 00:09:02.681 "data_offset": 0, 
00:09:02.681 "data_size": 65536 00:09:02.681 }, 00:09:02.681 { 00:09:02.681 "name": "BaseBdev3", 00:09:02.681 "uuid": "d0004d35-9bd6-4299-8963-38c63db77fab", 00:09:02.681 "is_configured": true, 00:09:02.681 "data_offset": 0, 00:09:02.681 "data_size": 65536 00:09:02.681 } 00:09:02.681 ] 00:09:02.681 }' 00:09:02.681 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.681 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.255 [2024-11-26 18:58:29.768237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.255 BaseBdev1 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.255 [ 00:09:03.255 { 00:09:03.255 "name": "BaseBdev1", 00:09:03.255 "aliases": [ 00:09:03.255 "48052536-77dc-4f73-86c3-a63fd0bc7e53" 00:09:03.255 ], 00:09:03.255 "product_name": "Malloc disk", 00:09:03.255 "block_size": 512, 00:09:03.255 "num_blocks": 65536, 00:09:03.255 "uuid": "48052536-77dc-4f73-86c3-a63fd0bc7e53", 00:09:03.255 "assigned_rate_limits": { 00:09:03.255 "rw_ios_per_sec": 0, 00:09:03.255 "rw_mbytes_per_sec": 0, 00:09:03.255 "r_mbytes_per_sec": 0, 00:09:03.255 "w_mbytes_per_sec": 0 00:09:03.255 }, 00:09:03.255 "claimed": true, 00:09:03.255 "claim_type": "exclusive_write", 00:09:03.255 "zoned": false, 00:09:03.255 "supported_io_types": { 00:09:03.255 "read": true, 00:09:03.255 "write": true, 00:09:03.255 "unmap": 
true, 00:09:03.255 "flush": true, 00:09:03.255 "reset": true, 00:09:03.255 "nvme_admin": false, 00:09:03.255 "nvme_io": false, 00:09:03.255 "nvme_io_md": false, 00:09:03.255 "write_zeroes": true, 00:09:03.255 "zcopy": true, 00:09:03.255 "get_zone_info": false, 00:09:03.255 "zone_management": false, 00:09:03.255 "zone_append": false, 00:09:03.255 "compare": false, 00:09:03.255 "compare_and_write": false, 00:09:03.255 "abort": true, 00:09:03.255 "seek_hole": false, 00:09:03.255 "seek_data": false, 00:09:03.255 "copy": true, 00:09:03.255 "nvme_iov_md": false 00:09:03.255 }, 00:09:03.255 "memory_domains": [ 00:09:03.255 { 00:09:03.255 "dma_device_id": "system", 00:09:03.255 "dma_device_type": 1 00:09:03.255 }, 00:09:03.255 { 00:09:03.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.255 "dma_device_type": 2 00:09:03.255 } 00:09:03.255 ], 00:09:03.255 "driver_specific": {} 00:09:03.255 } 00:09:03.255 ] 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.255 18:58:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.255 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.255 "name": "Existed_Raid", 00:09:03.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.255 "strip_size_kb": 64, 00:09:03.255 "state": "configuring", 00:09:03.255 "raid_level": "raid0", 00:09:03.255 "superblock": false, 00:09:03.255 "num_base_bdevs": 3, 00:09:03.255 "num_base_bdevs_discovered": 2, 00:09:03.255 "num_base_bdevs_operational": 3, 00:09:03.255 "base_bdevs_list": [ 00:09:03.255 { 00:09:03.255 "name": "BaseBdev1", 00:09:03.255 "uuid": "48052536-77dc-4f73-86c3-a63fd0bc7e53", 00:09:03.255 "is_configured": true, 00:09:03.256 "data_offset": 0, 00:09:03.256 "data_size": 65536 00:09:03.256 }, 00:09:03.256 { 00:09:03.256 "name": null, 00:09:03.256 "uuid": "ec5335b2-d8eb-4872-9caa-308c454f86f1", 00:09:03.256 "is_configured": false, 00:09:03.256 "data_offset": 0, 00:09:03.256 "data_size": 65536 00:09:03.256 }, 00:09:03.256 { 00:09:03.256 "name": "BaseBdev3", 00:09:03.256 "uuid": "d0004d35-9bd6-4299-8963-38c63db77fab", 00:09:03.256 "is_configured": true, 00:09:03.256 "data_offset": 0, 
00:09:03.256 "data_size": 65536 00:09:03.256 } 00:09:03.256 ] 00:09:03.256 }' 00:09:03.256 18:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.256 18:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.822 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:03.822 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.822 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.822 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.822 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.822 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:03.822 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:03.822 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.822 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.822 [2024-11-26 18:58:30.364449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:03.822 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.822 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.823 "name": "Existed_Raid", 00:09:03.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.823 "strip_size_kb": 64, 00:09:03.823 "state": "configuring", 00:09:03.823 "raid_level": "raid0", 00:09:03.823 "superblock": false, 00:09:03.823 "num_base_bdevs": 3, 00:09:03.823 "num_base_bdevs_discovered": 1, 00:09:03.823 "num_base_bdevs_operational": 3, 00:09:03.823 "base_bdevs_list": [ 00:09:03.823 { 00:09:03.823 "name": "BaseBdev1", 00:09:03.823 "uuid": "48052536-77dc-4f73-86c3-a63fd0bc7e53", 00:09:03.823 "is_configured": true, 00:09:03.823 "data_offset": 0, 00:09:03.823 "data_size": 65536 00:09:03.823 }, 00:09:03.823 { 
00:09:03.823 "name": null, 00:09:03.823 "uuid": "ec5335b2-d8eb-4872-9caa-308c454f86f1", 00:09:03.823 "is_configured": false, 00:09:03.823 "data_offset": 0, 00:09:03.823 "data_size": 65536 00:09:03.823 }, 00:09:03.823 { 00:09:03.823 "name": null, 00:09:03.823 "uuid": "d0004d35-9bd6-4299-8963-38c63db77fab", 00:09:03.823 "is_configured": false, 00:09:03.823 "data_offset": 0, 00:09:03.823 "data_size": 65536 00:09:03.823 } 00:09:03.823 ] 00:09:03.823 }' 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.823 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.389 [2024-11-26 18:58:30.940608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.389 "name": "Existed_Raid", 00:09:04.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.389 "strip_size_kb": 64, 00:09:04.389 "state": "configuring", 00:09:04.389 "raid_level": "raid0", 00:09:04.389 
"superblock": false, 00:09:04.389 "num_base_bdevs": 3, 00:09:04.389 "num_base_bdevs_discovered": 2, 00:09:04.389 "num_base_bdevs_operational": 3, 00:09:04.389 "base_bdevs_list": [ 00:09:04.389 { 00:09:04.389 "name": "BaseBdev1", 00:09:04.389 "uuid": "48052536-77dc-4f73-86c3-a63fd0bc7e53", 00:09:04.389 "is_configured": true, 00:09:04.389 "data_offset": 0, 00:09:04.389 "data_size": 65536 00:09:04.389 }, 00:09:04.389 { 00:09:04.389 "name": null, 00:09:04.389 "uuid": "ec5335b2-d8eb-4872-9caa-308c454f86f1", 00:09:04.389 "is_configured": false, 00:09:04.389 "data_offset": 0, 00:09:04.389 "data_size": 65536 00:09:04.389 }, 00:09:04.389 { 00:09:04.389 "name": "BaseBdev3", 00:09:04.389 "uuid": "d0004d35-9bd6-4299-8963-38c63db77fab", 00:09:04.389 "is_configured": true, 00:09:04.389 "data_offset": 0, 00:09:04.389 "data_size": 65536 00:09:04.389 } 00:09:04.389 ] 00:09:04.389 }' 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.389 18:58:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.957 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.957 18:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.957 18:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.957 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.957 18:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.957 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:04.957 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:04.957 18:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:04.957 18:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.957 [2024-11-26 18:58:31.512829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.215 "name": "Existed_Raid", 00:09:05.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.215 "strip_size_kb": 64, 00:09:05.215 "state": "configuring", 00:09:05.215 "raid_level": "raid0", 00:09:05.215 "superblock": false, 00:09:05.215 "num_base_bdevs": 3, 00:09:05.215 "num_base_bdevs_discovered": 1, 00:09:05.215 "num_base_bdevs_operational": 3, 00:09:05.215 "base_bdevs_list": [ 00:09:05.215 { 00:09:05.215 "name": null, 00:09:05.215 "uuid": "48052536-77dc-4f73-86c3-a63fd0bc7e53", 00:09:05.215 "is_configured": false, 00:09:05.215 "data_offset": 0, 00:09:05.215 "data_size": 65536 00:09:05.215 }, 00:09:05.215 { 00:09:05.215 "name": null, 00:09:05.215 "uuid": "ec5335b2-d8eb-4872-9caa-308c454f86f1", 00:09:05.215 "is_configured": false, 00:09:05.215 "data_offset": 0, 00:09:05.215 "data_size": 65536 00:09:05.215 }, 00:09:05.215 { 00:09:05.215 "name": "BaseBdev3", 00:09:05.215 "uuid": "d0004d35-9bd6-4299-8963-38c63db77fab", 00:09:05.215 "is_configured": true, 00:09:05.215 "data_offset": 0, 00:09:05.215 "data_size": 65536 00:09:05.215 } 00:09:05.215 ] 00:09:05.215 }' 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.215 18:58:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.782 [2024-11-26 18:58:32.174653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.782 "name": "Existed_Raid", 00:09:05.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.782 "strip_size_kb": 64, 00:09:05.782 "state": "configuring", 00:09:05.782 "raid_level": "raid0", 00:09:05.782 "superblock": false, 00:09:05.782 "num_base_bdevs": 3, 00:09:05.782 "num_base_bdevs_discovered": 2, 00:09:05.782 "num_base_bdevs_operational": 3, 00:09:05.782 "base_bdevs_list": [ 00:09:05.782 { 00:09:05.782 "name": null, 00:09:05.782 "uuid": "48052536-77dc-4f73-86c3-a63fd0bc7e53", 00:09:05.782 "is_configured": false, 00:09:05.782 "data_offset": 0, 00:09:05.782 "data_size": 65536 00:09:05.782 }, 00:09:05.782 { 00:09:05.782 "name": "BaseBdev2", 00:09:05.782 "uuid": "ec5335b2-d8eb-4872-9caa-308c454f86f1", 00:09:05.782 "is_configured": true, 00:09:05.782 "data_offset": 0, 00:09:05.782 "data_size": 65536 00:09:05.782 }, 00:09:05.782 { 00:09:05.782 "name": "BaseBdev3", 00:09:05.782 "uuid": "d0004d35-9bd6-4299-8963-38c63db77fab", 00:09:05.782 "is_configured": true, 00:09:05.782 "data_offset": 0, 00:09:05.782 "data_size": 65536 00:09:05.782 } 00:09:05.782 ] 00:09:05.782 }' 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.782 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.348 18:58:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 48052536-77dc-4f73-86c3-a63fd0bc7e53 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.348 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.348 [2024-11-26 18:58:32.838648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:06.348 [2024-11-26 18:58:32.838704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:06.348 [2024-11-26 18:58:32.838720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:06.348 [2024-11-26 18:58:32.839064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:06.349 [2024-11-26 18:58:32.839269] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:06.349 [2024-11-26 18:58:32.839285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:06.349 NewBaseBdev 00:09:06.349 [2024-11-26 18:58:32.839623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:06.349 [ 00:09:06.349 { 00:09:06.349 "name": "NewBaseBdev", 00:09:06.349 "aliases": [ 00:09:06.349 "48052536-77dc-4f73-86c3-a63fd0bc7e53" 00:09:06.349 ], 00:09:06.349 "product_name": "Malloc disk", 00:09:06.349 "block_size": 512, 00:09:06.349 "num_blocks": 65536, 00:09:06.349 "uuid": "48052536-77dc-4f73-86c3-a63fd0bc7e53", 00:09:06.349 "assigned_rate_limits": { 00:09:06.349 "rw_ios_per_sec": 0, 00:09:06.349 "rw_mbytes_per_sec": 0, 00:09:06.349 "r_mbytes_per_sec": 0, 00:09:06.349 "w_mbytes_per_sec": 0 00:09:06.349 }, 00:09:06.349 "claimed": true, 00:09:06.349 "claim_type": "exclusive_write", 00:09:06.349 "zoned": false, 00:09:06.349 "supported_io_types": { 00:09:06.349 "read": true, 00:09:06.349 "write": true, 00:09:06.349 "unmap": true, 00:09:06.349 "flush": true, 00:09:06.349 "reset": true, 00:09:06.349 "nvme_admin": false, 00:09:06.349 "nvme_io": false, 00:09:06.349 "nvme_io_md": false, 00:09:06.349 "write_zeroes": true, 00:09:06.349 "zcopy": true, 00:09:06.349 "get_zone_info": false, 00:09:06.349 "zone_management": false, 00:09:06.349 "zone_append": false, 00:09:06.349 "compare": false, 00:09:06.349 "compare_and_write": false, 00:09:06.349 "abort": true, 00:09:06.349 "seek_hole": false, 00:09:06.349 "seek_data": false, 00:09:06.349 "copy": true, 00:09:06.349 "nvme_iov_md": false 00:09:06.349 }, 00:09:06.349 "memory_domains": [ 00:09:06.349 { 00:09:06.349 "dma_device_id": "system", 00:09:06.349 "dma_device_type": 1 00:09:06.349 }, 00:09:06.349 { 00:09:06.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.349 "dma_device_type": 2 00:09:06.349 } 00:09:06.349 ], 00:09:06.349 "driver_specific": {} 00:09:06.349 } 00:09:06.349 ] 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.349 "name": "Existed_Raid", 00:09:06.349 "uuid": "03f7e674-71cd-45ad-a2e7-2dd42ca90d13", 00:09:06.349 "strip_size_kb": 64, 00:09:06.349 "state": "online", 00:09:06.349 "raid_level": "raid0", 00:09:06.349 "superblock": false, 00:09:06.349 "num_base_bdevs": 3, 00:09:06.349 
"num_base_bdevs_discovered": 3, 00:09:06.349 "num_base_bdevs_operational": 3, 00:09:06.349 "base_bdevs_list": [ 00:09:06.349 { 00:09:06.349 "name": "NewBaseBdev", 00:09:06.349 "uuid": "48052536-77dc-4f73-86c3-a63fd0bc7e53", 00:09:06.349 "is_configured": true, 00:09:06.349 "data_offset": 0, 00:09:06.349 "data_size": 65536 00:09:06.349 }, 00:09:06.349 { 00:09:06.349 "name": "BaseBdev2", 00:09:06.349 "uuid": "ec5335b2-d8eb-4872-9caa-308c454f86f1", 00:09:06.349 "is_configured": true, 00:09:06.349 "data_offset": 0, 00:09:06.349 "data_size": 65536 00:09:06.349 }, 00:09:06.349 { 00:09:06.349 "name": "BaseBdev3", 00:09:06.349 "uuid": "d0004d35-9bd6-4299-8963-38c63db77fab", 00:09:06.349 "is_configured": true, 00:09:06.349 "data_offset": 0, 00:09:06.349 "data_size": 65536 00:09:06.349 } 00:09:06.349 ] 00:09:06.349 }' 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.349 18:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.916 [2024-11-26 18:58:33.383246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.916 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.916 "name": "Existed_Raid", 00:09:06.916 "aliases": [ 00:09:06.916 "03f7e674-71cd-45ad-a2e7-2dd42ca90d13" 00:09:06.916 ], 00:09:06.916 "product_name": "Raid Volume", 00:09:06.916 "block_size": 512, 00:09:06.916 "num_blocks": 196608, 00:09:06.916 "uuid": "03f7e674-71cd-45ad-a2e7-2dd42ca90d13", 00:09:06.916 "assigned_rate_limits": { 00:09:06.916 "rw_ios_per_sec": 0, 00:09:06.916 "rw_mbytes_per_sec": 0, 00:09:06.916 "r_mbytes_per_sec": 0, 00:09:06.916 "w_mbytes_per_sec": 0 00:09:06.916 }, 00:09:06.916 "claimed": false, 00:09:06.916 "zoned": false, 00:09:06.916 "supported_io_types": { 00:09:06.916 "read": true, 00:09:06.916 "write": true, 00:09:06.916 "unmap": true, 00:09:06.916 "flush": true, 00:09:06.916 "reset": true, 00:09:06.916 "nvme_admin": false, 00:09:06.916 "nvme_io": false, 00:09:06.916 "nvme_io_md": false, 00:09:06.916 "write_zeroes": true, 00:09:06.916 "zcopy": false, 00:09:06.916 "get_zone_info": false, 00:09:06.916 "zone_management": false, 00:09:06.916 "zone_append": false, 00:09:06.916 "compare": false, 00:09:06.916 "compare_and_write": false, 00:09:06.916 "abort": false, 00:09:06.916 "seek_hole": false, 00:09:06.916 "seek_data": false, 00:09:06.916 "copy": false, 00:09:06.916 "nvme_iov_md": false 00:09:06.916 }, 00:09:06.916 "memory_domains": [ 00:09:06.916 { 00:09:06.916 "dma_device_id": "system", 00:09:06.916 "dma_device_type": 1 00:09:06.916 }, 00:09:06.916 { 00:09:06.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.916 "dma_device_type": 2 00:09:06.916 }, 
00:09:06.916 { 00:09:06.916 "dma_device_id": "system", 00:09:06.916 "dma_device_type": 1 00:09:06.916 }, 00:09:06.916 { 00:09:06.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.916 "dma_device_type": 2 00:09:06.916 }, 00:09:06.916 { 00:09:06.916 "dma_device_id": "system", 00:09:06.916 "dma_device_type": 1 00:09:06.916 }, 00:09:06.916 { 00:09:06.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.917 "dma_device_type": 2 00:09:06.917 } 00:09:06.917 ], 00:09:06.917 "driver_specific": { 00:09:06.917 "raid": { 00:09:06.917 "uuid": "03f7e674-71cd-45ad-a2e7-2dd42ca90d13", 00:09:06.917 "strip_size_kb": 64, 00:09:06.917 "state": "online", 00:09:06.917 "raid_level": "raid0", 00:09:06.917 "superblock": false, 00:09:06.917 "num_base_bdevs": 3, 00:09:06.917 "num_base_bdevs_discovered": 3, 00:09:06.917 "num_base_bdevs_operational": 3, 00:09:06.917 "base_bdevs_list": [ 00:09:06.917 { 00:09:06.917 "name": "NewBaseBdev", 00:09:06.917 "uuid": "48052536-77dc-4f73-86c3-a63fd0bc7e53", 00:09:06.917 "is_configured": true, 00:09:06.917 "data_offset": 0, 00:09:06.917 "data_size": 65536 00:09:06.917 }, 00:09:06.917 { 00:09:06.917 "name": "BaseBdev2", 00:09:06.917 "uuid": "ec5335b2-d8eb-4872-9caa-308c454f86f1", 00:09:06.917 "is_configured": true, 00:09:06.917 "data_offset": 0, 00:09:06.917 "data_size": 65536 00:09:06.917 }, 00:09:06.917 { 00:09:06.917 "name": "BaseBdev3", 00:09:06.917 "uuid": "d0004d35-9bd6-4299-8963-38c63db77fab", 00:09:06.917 "is_configured": true, 00:09:06.917 "data_offset": 0, 00:09:06.917 "data_size": 65536 00:09:06.917 } 00:09:06.917 ] 00:09:06.917 } 00:09:06.917 } 00:09:06.917 }' 00:09:06.917 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.917 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:06.917 BaseBdev2 00:09:06.917 BaseBdev3' 00:09:06.917 18:58:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.917 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.917 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.176 [2024-11-26 18:58:33.698905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.176 [2024-11-26 18:58:33.698940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.176 [2024-11-26 18:58:33.699040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.176 [2024-11-26 18:58:33.699120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.176 [2024-11-26 18:58:33.699140] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64141 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64141 ']' 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64141 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64141 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64141' 00:09:07.176 killing process with pid 64141 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64141 00:09:07.176 [2024-11-26 18:58:33.741159] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.176 18:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64141 00:09:07.435 [2024-11-26 18:58:34.024644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:08.809 00:09:08.809 real 0m11.990s 00:09:08.809 user 0m19.759s 00:09:08.809 sys 0m1.634s 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.809 ************************************ 00:09:08.809 END TEST raid_state_function_test 00:09:08.809 ************************************ 00:09:08.809 18:58:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:08.809 18:58:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:08.809 18:58:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.809 18:58:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.809 ************************************ 00:09:08.809 START TEST raid_state_function_test_sb 00:09:08.809 ************************************ 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:08.809 Process raid pid: 64783 00:09:08.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64783 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64783' 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64783 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64783 ']' 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.809 18:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.810 18:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.810 18:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.810 [2024-11-26 18:58:35.367596] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:09:08.810 [2024-11-26 18:58:35.368005] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.083 [2024-11-26 18:58:35.554017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.083 [2024-11-26 18:58:35.698161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.341 [2024-11-26 18:58:35.927690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.341 [2024-11-26 18:58:35.927949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.906 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.906 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:09.906 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.906 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.906 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.906 [2024-11-26 18:58:36.362819] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.906 [2024-11-26 18:58:36.362891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.906 [2024-11-26 18:58:36.362909] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.906 [2024-11-26 18:58:36.362926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.906 [2024-11-26 18:58:36.362935] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:09.906 [2024-11-26 18:58:36.362949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.906 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.906 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.906 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.907 "name": "Existed_Raid", 00:09:09.907 "uuid": "3318c63a-964d-4145-a564-8aa17a39832e", 00:09:09.907 "strip_size_kb": 64, 00:09:09.907 "state": "configuring", 00:09:09.907 "raid_level": "raid0", 00:09:09.907 "superblock": true, 00:09:09.907 "num_base_bdevs": 3, 00:09:09.907 "num_base_bdevs_discovered": 0, 00:09:09.907 "num_base_bdevs_operational": 3, 00:09:09.907 "base_bdevs_list": [ 00:09:09.907 { 00:09:09.907 "name": "BaseBdev1", 00:09:09.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.907 "is_configured": false, 00:09:09.907 "data_offset": 0, 00:09:09.907 "data_size": 0 00:09:09.907 }, 00:09:09.907 { 00:09:09.907 "name": "BaseBdev2", 00:09:09.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.907 "is_configured": false, 00:09:09.907 "data_offset": 0, 00:09:09.907 "data_size": 0 00:09:09.907 }, 00:09:09.907 { 00:09:09.907 "name": "BaseBdev3", 00:09:09.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.907 "is_configured": false, 00:09:09.907 "data_offset": 0, 00:09:09.907 "data_size": 0 00:09:09.907 } 00:09:09.907 ] 00:09:09.907 }' 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.907 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.473 [2024-11-26 18:58:36.870872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.473 [2024-11-26 18:58:36.871071] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.473 [2024-11-26 18:58:36.878851] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.473 [2024-11-26 18:58:36.879030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.473 [2024-11-26 18:58:36.879146] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.473 [2024-11-26 18:58:36.879273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.473 [2024-11-26 18:58:36.879397] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:10.473 [2024-11-26 18:58:36.879538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.473 [2024-11-26 18:58:36.928754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.473 BaseBdev1 
00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.473 [ 00:09:10.473 { 00:09:10.473 "name": "BaseBdev1", 00:09:10.473 "aliases": [ 00:09:10.473 "e618ecdf-a892-4c61-848f-e473c98f6764" 00:09:10.473 ], 00:09:10.473 "product_name": "Malloc disk", 00:09:10.473 "block_size": 512, 00:09:10.473 "num_blocks": 65536, 00:09:10.473 "uuid": "e618ecdf-a892-4c61-848f-e473c98f6764", 00:09:10.473 "assigned_rate_limits": { 00:09:10.473 
"rw_ios_per_sec": 0, 00:09:10.473 "rw_mbytes_per_sec": 0, 00:09:10.473 "r_mbytes_per_sec": 0, 00:09:10.473 "w_mbytes_per_sec": 0 00:09:10.473 }, 00:09:10.473 "claimed": true, 00:09:10.473 "claim_type": "exclusive_write", 00:09:10.473 "zoned": false, 00:09:10.473 "supported_io_types": { 00:09:10.473 "read": true, 00:09:10.473 "write": true, 00:09:10.473 "unmap": true, 00:09:10.473 "flush": true, 00:09:10.473 "reset": true, 00:09:10.473 "nvme_admin": false, 00:09:10.473 "nvme_io": false, 00:09:10.473 "nvme_io_md": false, 00:09:10.473 "write_zeroes": true, 00:09:10.473 "zcopy": true, 00:09:10.473 "get_zone_info": false, 00:09:10.473 "zone_management": false, 00:09:10.473 "zone_append": false, 00:09:10.473 "compare": false, 00:09:10.473 "compare_and_write": false, 00:09:10.473 "abort": true, 00:09:10.473 "seek_hole": false, 00:09:10.473 "seek_data": false, 00:09:10.473 "copy": true, 00:09:10.473 "nvme_iov_md": false 00:09:10.473 }, 00:09:10.473 "memory_domains": [ 00:09:10.473 { 00:09:10.473 "dma_device_id": "system", 00:09:10.473 "dma_device_type": 1 00:09:10.473 }, 00:09:10.473 { 00:09:10.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.473 "dma_device_type": 2 00:09:10.473 } 00:09:10.473 ], 00:09:10.473 "driver_specific": {} 00:09:10.473 } 00:09:10.473 ] 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.473 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.474 18:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.474 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.474 "name": "Existed_Raid", 00:09:10.474 "uuid": "7190a112-7d69-4816-8496-0f3d0644c10e", 00:09:10.474 "strip_size_kb": 64, 00:09:10.474 "state": "configuring", 00:09:10.474 "raid_level": "raid0", 00:09:10.474 "superblock": true, 00:09:10.474 "num_base_bdevs": 3, 00:09:10.474 "num_base_bdevs_discovered": 1, 00:09:10.474 "num_base_bdevs_operational": 3, 00:09:10.474 "base_bdevs_list": [ 00:09:10.474 { 00:09:10.474 "name": "BaseBdev1", 00:09:10.474 "uuid": "e618ecdf-a892-4c61-848f-e473c98f6764", 00:09:10.474 "is_configured": true, 00:09:10.474 "data_offset": 2048, 00:09:10.474 "data_size": 63488 
00:09:10.474 }, 00:09:10.474 { 00:09:10.474 "name": "BaseBdev2", 00:09:10.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.474 "is_configured": false, 00:09:10.474 "data_offset": 0, 00:09:10.474 "data_size": 0 00:09:10.474 }, 00:09:10.474 { 00:09:10.474 "name": "BaseBdev3", 00:09:10.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.474 "is_configured": false, 00:09:10.474 "data_offset": 0, 00:09:10.474 "data_size": 0 00:09:10.474 } 00:09:10.474 ] 00:09:10.474 }' 00:09:10.474 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.474 18:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.072 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:11.072 18:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.072 18:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.072 [2024-11-26 18:58:37.488952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.072 [2024-11-26 18:58:37.489027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:11.072 18:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.072 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:11.072 18:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 [2024-11-26 18:58:37.501032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.073 [2024-11-26 
18:58:37.503751] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:11.073 [2024-11-26 18:58:37.503956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:11.073 [2024-11-26 18:58:37.503996] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:11.073 [2024-11-26 18:58:37.504023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.073 "name": "Existed_Raid", 00:09:11.073 "uuid": "1e3b0fd9-1edb-4068-ba29-3115cf074905", 00:09:11.073 "strip_size_kb": 64, 00:09:11.073 "state": "configuring", 00:09:11.073 "raid_level": "raid0", 00:09:11.073 "superblock": true, 00:09:11.073 "num_base_bdevs": 3, 00:09:11.073 "num_base_bdevs_discovered": 1, 00:09:11.073 "num_base_bdevs_operational": 3, 00:09:11.073 "base_bdevs_list": [ 00:09:11.073 { 00:09:11.073 "name": "BaseBdev1", 00:09:11.073 "uuid": "e618ecdf-a892-4c61-848f-e473c98f6764", 00:09:11.073 "is_configured": true, 00:09:11.073 "data_offset": 2048, 00:09:11.073 "data_size": 63488 00:09:11.073 }, 00:09:11.073 { 00:09:11.073 "name": "BaseBdev2", 00:09:11.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.073 "is_configured": false, 00:09:11.073 "data_offset": 0, 00:09:11.073 "data_size": 0 00:09:11.073 }, 00:09:11.073 { 00:09:11.073 "name": "BaseBdev3", 00:09:11.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.073 "is_configured": false, 00:09:11.073 "data_offset": 0, 00:09:11.073 "data_size": 0 00:09:11.073 } 00:09:11.073 ] 00:09:11.073 }' 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.073 18:58:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.683 [2024-11-26 18:58:38.052311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.683 BaseBdev2 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.683 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.683 [ 00:09:11.683 { 00:09:11.683 "name": "BaseBdev2", 00:09:11.683 "aliases": [ 00:09:11.683 "f3092541-bf1a-46bf-954d-49146b9f6dbe" 00:09:11.683 ], 00:09:11.683 "product_name": "Malloc disk", 00:09:11.683 "block_size": 512, 00:09:11.683 "num_blocks": 65536, 00:09:11.683 "uuid": "f3092541-bf1a-46bf-954d-49146b9f6dbe", 00:09:11.683 "assigned_rate_limits": { 00:09:11.683 "rw_ios_per_sec": 0, 00:09:11.683 "rw_mbytes_per_sec": 0, 00:09:11.683 "r_mbytes_per_sec": 0, 00:09:11.683 "w_mbytes_per_sec": 0 00:09:11.683 }, 00:09:11.683 "claimed": true, 00:09:11.683 "claim_type": "exclusive_write", 00:09:11.683 "zoned": false, 00:09:11.683 "supported_io_types": { 00:09:11.683 "read": true, 00:09:11.683 "write": true, 00:09:11.683 "unmap": true, 00:09:11.683 "flush": true, 00:09:11.683 "reset": true, 00:09:11.683 "nvme_admin": false, 00:09:11.683 "nvme_io": false, 00:09:11.683 "nvme_io_md": false, 00:09:11.683 "write_zeroes": true, 00:09:11.683 "zcopy": true, 00:09:11.683 "get_zone_info": false, 00:09:11.683 "zone_management": false, 00:09:11.683 "zone_append": false, 00:09:11.683 "compare": false, 00:09:11.683 "compare_and_write": false, 00:09:11.683 "abort": true, 00:09:11.683 "seek_hole": false, 00:09:11.683 "seek_data": false, 00:09:11.683 "copy": true, 00:09:11.683 "nvme_iov_md": false 00:09:11.683 }, 00:09:11.683 "memory_domains": [ 00:09:11.683 { 00:09:11.683 "dma_device_id": "system", 00:09:11.683 "dma_device_type": 1 00:09:11.683 }, 00:09:11.683 { 00:09:11.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.683 "dma_device_type": 2 00:09:11.683 } 00:09:11.683 ], 00:09:11.683 "driver_specific": {} 00:09:11.683 } 00:09:11.683 ] 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.684 "name": "Existed_Raid", 00:09:11.684 "uuid": "1e3b0fd9-1edb-4068-ba29-3115cf074905", 00:09:11.684 "strip_size_kb": 64, 00:09:11.684 "state": "configuring", 00:09:11.684 "raid_level": "raid0", 00:09:11.684 "superblock": true, 00:09:11.684 "num_base_bdevs": 3, 00:09:11.684 "num_base_bdevs_discovered": 2, 00:09:11.684 "num_base_bdevs_operational": 3, 00:09:11.684 "base_bdevs_list": [ 00:09:11.684 { 00:09:11.684 "name": "BaseBdev1", 00:09:11.684 "uuid": "e618ecdf-a892-4c61-848f-e473c98f6764", 00:09:11.684 "is_configured": true, 00:09:11.684 "data_offset": 2048, 00:09:11.684 "data_size": 63488 00:09:11.684 }, 00:09:11.684 { 00:09:11.684 "name": "BaseBdev2", 00:09:11.684 "uuid": "f3092541-bf1a-46bf-954d-49146b9f6dbe", 00:09:11.684 "is_configured": true, 00:09:11.684 "data_offset": 2048, 00:09:11.684 "data_size": 63488 00:09:11.684 }, 00:09:11.684 { 00:09:11.684 "name": "BaseBdev3", 00:09:11.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.684 "is_configured": false, 00:09:11.684 "data_offset": 0, 00:09:11.684 "data_size": 0 00:09:11.684 } 00:09:11.684 ] 00:09:11.684 }' 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.684 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.251 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.251 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.251 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.251 [2024-11-26 18:58:38.647591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.252 [2024-11-26 18:58:38.648147] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:12.252 [2024-11-26 18:58:38.648185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:12.252 BaseBdev3 00:09:12.252 [2024-11-26 18:58:38.648742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:12.252 [2024-11-26 18:58:38.648956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:12.252 [2024-11-26 18:58:38.648974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:12.252 [2024-11-26 18:58:38.649176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.252 [ 00:09:12.252 { 00:09:12.252 "name": "BaseBdev3", 00:09:12.252 "aliases": [ 00:09:12.252 "3b69c088-4038-4ec5-93a0-dd51034023a6" 00:09:12.252 ], 00:09:12.252 "product_name": "Malloc disk", 00:09:12.252 "block_size": 512, 00:09:12.252 "num_blocks": 65536, 00:09:12.252 "uuid": "3b69c088-4038-4ec5-93a0-dd51034023a6", 00:09:12.252 "assigned_rate_limits": { 00:09:12.252 "rw_ios_per_sec": 0, 00:09:12.252 "rw_mbytes_per_sec": 0, 00:09:12.252 "r_mbytes_per_sec": 0, 00:09:12.252 "w_mbytes_per_sec": 0 00:09:12.252 }, 00:09:12.252 "claimed": true, 00:09:12.252 "claim_type": "exclusive_write", 00:09:12.252 "zoned": false, 00:09:12.252 "supported_io_types": { 00:09:12.252 "read": true, 00:09:12.252 "write": true, 00:09:12.252 "unmap": true, 00:09:12.252 "flush": true, 00:09:12.252 "reset": true, 00:09:12.252 "nvme_admin": false, 00:09:12.252 "nvme_io": false, 00:09:12.252 "nvme_io_md": false, 00:09:12.252 "write_zeroes": true, 00:09:12.252 "zcopy": true, 00:09:12.252 "get_zone_info": false, 00:09:12.252 "zone_management": false, 00:09:12.252 "zone_append": false, 00:09:12.252 "compare": false, 00:09:12.252 "compare_and_write": false, 00:09:12.252 "abort": true, 00:09:12.252 "seek_hole": false, 00:09:12.252 "seek_data": false, 00:09:12.252 "copy": true, 00:09:12.252 "nvme_iov_md": false 00:09:12.252 }, 00:09:12.252 "memory_domains": [ 00:09:12.252 { 00:09:12.252 "dma_device_id": "system", 00:09:12.252 "dma_device_type": 1 00:09:12.252 }, 00:09:12.252 { 00:09:12.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.252 "dma_device_type": 2 00:09:12.252 } 00:09:12.252 ], 00:09:12.252 "driver_specific": 
{} 00:09:12.252 } 00:09:12.252 ] 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.252 "name": "Existed_Raid", 00:09:12.252 "uuid": "1e3b0fd9-1edb-4068-ba29-3115cf074905", 00:09:12.252 "strip_size_kb": 64, 00:09:12.252 "state": "online", 00:09:12.252 "raid_level": "raid0", 00:09:12.252 "superblock": true, 00:09:12.252 "num_base_bdevs": 3, 00:09:12.252 "num_base_bdevs_discovered": 3, 00:09:12.252 "num_base_bdevs_operational": 3, 00:09:12.252 "base_bdevs_list": [ 00:09:12.252 { 00:09:12.252 "name": "BaseBdev1", 00:09:12.252 "uuid": "e618ecdf-a892-4c61-848f-e473c98f6764", 00:09:12.252 "is_configured": true, 00:09:12.252 "data_offset": 2048, 00:09:12.252 "data_size": 63488 00:09:12.252 }, 00:09:12.252 { 00:09:12.252 "name": "BaseBdev2", 00:09:12.252 "uuid": "f3092541-bf1a-46bf-954d-49146b9f6dbe", 00:09:12.252 "is_configured": true, 00:09:12.252 "data_offset": 2048, 00:09:12.252 "data_size": 63488 00:09:12.252 }, 00:09:12.252 { 00:09:12.252 "name": "BaseBdev3", 00:09:12.252 "uuid": "3b69c088-4038-4ec5-93a0-dd51034023a6", 00:09:12.252 "is_configured": true, 00:09:12.252 "data_offset": 2048, 00:09:12.252 "data_size": 63488 00:09:12.252 } 00:09:12.252 ] 00:09:12.252 }' 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.252 18:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.820 [2024-11-26 18:58:39.188170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.820 "name": "Existed_Raid", 00:09:12.820 "aliases": [ 00:09:12.820 "1e3b0fd9-1edb-4068-ba29-3115cf074905" 00:09:12.820 ], 00:09:12.820 "product_name": "Raid Volume", 00:09:12.820 "block_size": 512, 00:09:12.820 "num_blocks": 190464, 00:09:12.820 "uuid": "1e3b0fd9-1edb-4068-ba29-3115cf074905", 00:09:12.820 "assigned_rate_limits": { 00:09:12.820 "rw_ios_per_sec": 0, 00:09:12.820 "rw_mbytes_per_sec": 0, 00:09:12.820 "r_mbytes_per_sec": 0, 00:09:12.820 "w_mbytes_per_sec": 0 00:09:12.820 }, 00:09:12.820 "claimed": false, 00:09:12.820 "zoned": false, 00:09:12.820 "supported_io_types": { 00:09:12.820 "read": true, 00:09:12.820 "write": true, 00:09:12.820 "unmap": true, 00:09:12.820 "flush": true, 00:09:12.820 "reset": true, 00:09:12.820 "nvme_admin": false, 00:09:12.820 "nvme_io": false, 00:09:12.820 "nvme_io_md": false, 00:09:12.820 
"write_zeroes": true, 00:09:12.820 "zcopy": false, 00:09:12.820 "get_zone_info": false, 00:09:12.820 "zone_management": false, 00:09:12.820 "zone_append": false, 00:09:12.820 "compare": false, 00:09:12.820 "compare_and_write": false, 00:09:12.820 "abort": false, 00:09:12.820 "seek_hole": false, 00:09:12.820 "seek_data": false, 00:09:12.820 "copy": false, 00:09:12.820 "nvme_iov_md": false 00:09:12.820 }, 00:09:12.820 "memory_domains": [ 00:09:12.820 { 00:09:12.820 "dma_device_id": "system", 00:09:12.820 "dma_device_type": 1 00:09:12.820 }, 00:09:12.820 { 00:09:12.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.820 "dma_device_type": 2 00:09:12.820 }, 00:09:12.820 { 00:09:12.820 "dma_device_id": "system", 00:09:12.820 "dma_device_type": 1 00:09:12.820 }, 00:09:12.820 { 00:09:12.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.820 "dma_device_type": 2 00:09:12.820 }, 00:09:12.820 { 00:09:12.820 "dma_device_id": "system", 00:09:12.820 "dma_device_type": 1 00:09:12.820 }, 00:09:12.820 { 00:09:12.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.820 "dma_device_type": 2 00:09:12.820 } 00:09:12.820 ], 00:09:12.820 "driver_specific": { 00:09:12.820 "raid": { 00:09:12.820 "uuid": "1e3b0fd9-1edb-4068-ba29-3115cf074905", 00:09:12.820 "strip_size_kb": 64, 00:09:12.820 "state": "online", 00:09:12.820 "raid_level": "raid0", 00:09:12.820 "superblock": true, 00:09:12.820 "num_base_bdevs": 3, 00:09:12.820 "num_base_bdevs_discovered": 3, 00:09:12.820 "num_base_bdevs_operational": 3, 00:09:12.820 "base_bdevs_list": [ 00:09:12.820 { 00:09:12.820 "name": "BaseBdev1", 00:09:12.820 "uuid": "e618ecdf-a892-4c61-848f-e473c98f6764", 00:09:12.820 "is_configured": true, 00:09:12.820 "data_offset": 2048, 00:09:12.820 "data_size": 63488 00:09:12.820 }, 00:09:12.820 { 00:09:12.820 "name": "BaseBdev2", 00:09:12.820 "uuid": "f3092541-bf1a-46bf-954d-49146b9f6dbe", 00:09:12.820 "is_configured": true, 00:09:12.820 "data_offset": 2048, 00:09:12.820 "data_size": 63488 00:09:12.820 }, 
00:09:12.820 { 00:09:12.820 "name": "BaseBdev3", 00:09:12.820 "uuid": "3b69c088-4038-4ec5-93a0-dd51034023a6", 00:09:12.820 "is_configured": true, 00:09:12.820 "data_offset": 2048, 00:09:12.820 "data_size": 63488 00:09:12.820 } 00:09:12.820 ] 00:09:12.820 } 00:09:12.820 } 00:09:12.820 }' 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:12.820 BaseBdev2 00:09:12.820 BaseBdev3' 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.820 
18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.820 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.080 [2024-11-26 18:58:39.495924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:13.080 [2024-11-26 18:58:39.496087] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.080 [2024-11-26 18:58:39.496354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.080 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.081 "name": "Existed_Raid", 00:09:13.081 "uuid": "1e3b0fd9-1edb-4068-ba29-3115cf074905", 00:09:13.081 "strip_size_kb": 64, 00:09:13.081 "state": "offline", 00:09:13.081 "raid_level": "raid0", 00:09:13.081 "superblock": true, 00:09:13.081 "num_base_bdevs": 3, 00:09:13.081 "num_base_bdevs_discovered": 2, 00:09:13.081 "num_base_bdevs_operational": 2, 00:09:13.081 "base_bdevs_list": [ 00:09:13.081 { 00:09:13.081 "name": null, 00:09:13.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.081 "is_configured": false, 00:09:13.081 "data_offset": 0, 00:09:13.081 "data_size": 63488 00:09:13.081 }, 00:09:13.081 { 00:09:13.081 "name": "BaseBdev2", 00:09:13.081 "uuid": "f3092541-bf1a-46bf-954d-49146b9f6dbe", 00:09:13.081 "is_configured": true, 00:09:13.081 "data_offset": 2048, 00:09:13.081 "data_size": 63488 00:09:13.081 }, 00:09:13.081 { 00:09:13.081 "name": "BaseBdev3", 00:09:13.081 "uuid": "3b69c088-4038-4ec5-93a0-dd51034023a6", 
00:09:13.081 "is_configured": true, 00:09:13.081 "data_offset": 2048, 00:09:13.081 "data_size": 63488 00:09:13.081 } 00:09:13.081 ] 00:09:13.081 }' 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.081 18:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.648 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.648 [2024-11-26 18:58:40.205514] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.907 [2024-11-26 18:58:40.364743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:13.907 [2024-11-26 18:58:40.364987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.907 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.166 BaseBdev2 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:14.166 18:58:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.166 [ 00:09:14.166 { 00:09:14.166 "name": "BaseBdev2", 00:09:14.166 "aliases": [ 00:09:14.166 "2f05522f-5374-4f20-b880-be71ce55e3d6" 00:09:14.166 ], 00:09:14.166 "product_name": "Malloc disk", 00:09:14.166 "block_size": 512, 00:09:14.166 "num_blocks": 65536, 00:09:14.166 "uuid": "2f05522f-5374-4f20-b880-be71ce55e3d6", 00:09:14.166 "assigned_rate_limits": { 00:09:14.166 "rw_ios_per_sec": 0, 00:09:14.166 "rw_mbytes_per_sec": 0, 00:09:14.166 "r_mbytes_per_sec": 0, 00:09:14.166 "w_mbytes_per_sec": 0 00:09:14.166 }, 00:09:14.166 "claimed": false, 00:09:14.166 "zoned": false, 00:09:14.166 "supported_io_types": { 00:09:14.166 "read": true, 00:09:14.166 "write": true, 00:09:14.166 "unmap": true, 00:09:14.166 "flush": true, 00:09:14.166 "reset": true, 00:09:14.166 "nvme_admin": false, 00:09:14.166 "nvme_io": false, 00:09:14.166 "nvme_io_md": false, 00:09:14.166 "write_zeroes": true, 00:09:14.166 "zcopy": true, 00:09:14.166 "get_zone_info": false, 00:09:14.166 
"zone_management": false, 00:09:14.166 "zone_append": false, 00:09:14.166 "compare": false, 00:09:14.166 "compare_and_write": false, 00:09:14.166 "abort": true, 00:09:14.166 "seek_hole": false, 00:09:14.166 "seek_data": false, 00:09:14.166 "copy": true, 00:09:14.166 "nvme_iov_md": false 00:09:14.166 }, 00:09:14.166 "memory_domains": [ 00:09:14.166 { 00:09:14.166 "dma_device_id": "system", 00:09:14.166 "dma_device_type": 1 00:09:14.166 }, 00:09:14.166 { 00:09:14.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.166 "dma_device_type": 2 00:09:14.166 } 00:09:14.166 ], 00:09:14.166 "driver_specific": {} 00:09:14.166 } 00:09:14.166 ] 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.166 BaseBdev3 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.166 [ 00:09:14.166 { 00:09:14.166 "name": "BaseBdev3", 00:09:14.166 "aliases": [ 00:09:14.166 "66e7cca1-36c0-4eb6-ac49-711799b89b4d" 00:09:14.166 ], 00:09:14.166 "product_name": "Malloc disk", 00:09:14.166 "block_size": 512, 00:09:14.166 "num_blocks": 65536, 00:09:14.166 "uuid": "66e7cca1-36c0-4eb6-ac49-711799b89b4d", 00:09:14.166 "assigned_rate_limits": { 00:09:14.166 "rw_ios_per_sec": 0, 00:09:14.166 "rw_mbytes_per_sec": 0, 00:09:14.166 "r_mbytes_per_sec": 0, 00:09:14.166 "w_mbytes_per_sec": 0 00:09:14.166 }, 00:09:14.166 "claimed": false, 00:09:14.166 "zoned": false, 00:09:14.166 "supported_io_types": { 00:09:14.166 "read": true, 00:09:14.166 "write": true, 00:09:14.166 "unmap": true, 00:09:14.166 "flush": true, 00:09:14.166 "reset": true, 00:09:14.166 "nvme_admin": false, 00:09:14.166 "nvme_io": false, 00:09:14.166 "nvme_io_md": false, 00:09:14.166 "write_zeroes": true, 00:09:14.166 
"zcopy": true, 00:09:14.166 "get_zone_info": false, 00:09:14.166 "zone_management": false, 00:09:14.166 "zone_append": false, 00:09:14.166 "compare": false, 00:09:14.166 "compare_and_write": false, 00:09:14.166 "abort": true, 00:09:14.166 "seek_hole": false, 00:09:14.166 "seek_data": false, 00:09:14.166 "copy": true, 00:09:14.166 "nvme_iov_md": false 00:09:14.166 }, 00:09:14.166 "memory_domains": [ 00:09:14.166 { 00:09:14.166 "dma_device_id": "system", 00:09:14.166 "dma_device_type": 1 00:09:14.166 }, 00:09:14.166 { 00:09:14.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.166 "dma_device_type": 2 00:09:14.166 } 00:09:14.166 ], 00:09:14.166 "driver_specific": {} 00:09:14.166 } 00:09:14.166 ] 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.166 [2024-11-26 18:58:40.681965] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.166 [2024-11-26 18:58:40.682173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.166 [2024-11-26 18:58:40.682331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.166 [2024-11-26 18:58:40.685037] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.166 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.167 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.167 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.167 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.167 18:58:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.167 "name": "Existed_Raid", 00:09:14.167 "uuid": "ff548583-c3e2-4595-9430-b12f6570a18f", 00:09:14.167 "strip_size_kb": 64, 00:09:14.167 "state": "configuring", 00:09:14.167 "raid_level": "raid0", 00:09:14.167 "superblock": true, 00:09:14.167 "num_base_bdevs": 3, 00:09:14.167 "num_base_bdevs_discovered": 2, 00:09:14.167 "num_base_bdevs_operational": 3, 00:09:14.167 "base_bdevs_list": [ 00:09:14.167 { 00:09:14.167 "name": "BaseBdev1", 00:09:14.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.167 "is_configured": false, 00:09:14.167 "data_offset": 0, 00:09:14.167 "data_size": 0 00:09:14.167 }, 00:09:14.167 { 00:09:14.167 "name": "BaseBdev2", 00:09:14.167 "uuid": "2f05522f-5374-4f20-b880-be71ce55e3d6", 00:09:14.167 "is_configured": true, 00:09:14.167 "data_offset": 2048, 00:09:14.167 "data_size": 63488 00:09:14.167 }, 00:09:14.167 { 00:09:14.167 "name": "BaseBdev3", 00:09:14.167 "uuid": "66e7cca1-36c0-4eb6-ac49-711799b89b4d", 00:09:14.167 "is_configured": true, 00:09:14.167 "data_offset": 2048, 00:09:14.167 "data_size": 63488 00:09:14.167 } 00:09:14.167 ] 00:09:14.167 }' 00:09:14.167 18:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.167 18:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.733 [2024-11-26 18:58:41.210146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.733 18:58:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.733 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.733 "name": "Existed_Raid", 00:09:14.733 "uuid": "ff548583-c3e2-4595-9430-b12f6570a18f", 00:09:14.733 "strip_size_kb": 64, 
00:09:14.733 "state": "configuring", 00:09:14.733 "raid_level": "raid0", 00:09:14.733 "superblock": true, 00:09:14.733 "num_base_bdevs": 3, 00:09:14.733 "num_base_bdevs_discovered": 1, 00:09:14.733 "num_base_bdevs_operational": 3, 00:09:14.733 "base_bdevs_list": [ 00:09:14.733 { 00:09:14.733 "name": "BaseBdev1", 00:09:14.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.733 "is_configured": false, 00:09:14.733 "data_offset": 0, 00:09:14.733 "data_size": 0 00:09:14.733 }, 00:09:14.733 { 00:09:14.734 "name": null, 00:09:14.734 "uuid": "2f05522f-5374-4f20-b880-be71ce55e3d6", 00:09:14.734 "is_configured": false, 00:09:14.734 "data_offset": 0, 00:09:14.734 "data_size": 63488 00:09:14.734 }, 00:09:14.734 { 00:09:14.734 "name": "BaseBdev3", 00:09:14.734 "uuid": "66e7cca1-36c0-4eb6-ac49-711799b89b4d", 00:09:14.734 "is_configured": true, 00:09:14.734 "data_offset": 2048, 00:09:14.734 "data_size": 63488 00:09:14.734 } 00:09:14.734 ] 00:09:14.734 }' 00:09:14.734 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.734 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 [2024-11-26 18:58:41.808656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.346 BaseBdev1 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.346 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.347 
[ 00:09:15.347 { 00:09:15.347 "name": "BaseBdev1", 00:09:15.347 "aliases": [ 00:09:15.347 "7de707e4-2d3b-4470-ac11-7aed5b02626d" 00:09:15.347 ], 00:09:15.347 "product_name": "Malloc disk", 00:09:15.347 "block_size": 512, 00:09:15.347 "num_blocks": 65536, 00:09:15.347 "uuid": "7de707e4-2d3b-4470-ac11-7aed5b02626d", 00:09:15.347 "assigned_rate_limits": { 00:09:15.347 "rw_ios_per_sec": 0, 00:09:15.347 "rw_mbytes_per_sec": 0, 00:09:15.347 "r_mbytes_per_sec": 0, 00:09:15.347 "w_mbytes_per_sec": 0 00:09:15.347 }, 00:09:15.347 "claimed": true, 00:09:15.347 "claim_type": "exclusive_write", 00:09:15.347 "zoned": false, 00:09:15.347 "supported_io_types": { 00:09:15.347 "read": true, 00:09:15.347 "write": true, 00:09:15.347 "unmap": true, 00:09:15.347 "flush": true, 00:09:15.347 "reset": true, 00:09:15.347 "nvme_admin": false, 00:09:15.347 "nvme_io": false, 00:09:15.347 "nvme_io_md": false, 00:09:15.347 "write_zeroes": true, 00:09:15.347 "zcopy": true, 00:09:15.347 "get_zone_info": false, 00:09:15.347 "zone_management": false, 00:09:15.347 "zone_append": false, 00:09:15.347 "compare": false, 00:09:15.347 "compare_and_write": false, 00:09:15.347 "abort": true, 00:09:15.347 "seek_hole": false, 00:09:15.347 "seek_data": false, 00:09:15.347 "copy": true, 00:09:15.347 "nvme_iov_md": false 00:09:15.347 }, 00:09:15.347 "memory_domains": [ 00:09:15.347 { 00:09:15.347 "dma_device_id": "system", 00:09:15.347 "dma_device_type": 1 00:09:15.347 }, 00:09:15.347 { 00:09:15.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.347 "dma_device_type": 2 00:09:15.347 } 00:09:15.347 ], 00:09:15.347 "driver_specific": {} 00:09:15.347 } 00:09:15.347 ] 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.347 "name": "Existed_Raid", 00:09:15.347 "uuid": "ff548583-c3e2-4595-9430-b12f6570a18f", 00:09:15.347 "strip_size_kb": 64, 00:09:15.347 "state": "configuring", 00:09:15.347 "raid_level": "raid0", 00:09:15.347 "superblock": true, 
00:09:15.347 "num_base_bdevs": 3, 00:09:15.347 "num_base_bdevs_discovered": 2, 00:09:15.347 "num_base_bdevs_operational": 3, 00:09:15.347 "base_bdevs_list": [ 00:09:15.347 { 00:09:15.347 "name": "BaseBdev1", 00:09:15.347 "uuid": "7de707e4-2d3b-4470-ac11-7aed5b02626d", 00:09:15.347 "is_configured": true, 00:09:15.347 "data_offset": 2048, 00:09:15.347 "data_size": 63488 00:09:15.347 }, 00:09:15.347 { 00:09:15.347 "name": null, 00:09:15.347 "uuid": "2f05522f-5374-4f20-b880-be71ce55e3d6", 00:09:15.347 "is_configured": false, 00:09:15.347 "data_offset": 0, 00:09:15.347 "data_size": 63488 00:09:15.347 }, 00:09:15.347 { 00:09:15.347 "name": "BaseBdev3", 00:09:15.347 "uuid": "66e7cca1-36c0-4eb6-ac49-711799b89b4d", 00:09:15.347 "is_configured": true, 00:09:15.347 "data_offset": 2048, 00:09:15.347 "data_size": 63488 00:09:15.347 } 00:09:15.347 ] 00:09:15.347 }' 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.347 18:58:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.914 [2024-11-26 18:58:42.396901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.914 "name": "Existed_Raid", 00:09:15.914 "uuid": "ff548583-c3e2-4595-9430-b12f6570a18f", 00:09:15.914 "strip_size_kb": 64, 00:09:15.914 "state": "configuring", 00:09:15.914 "raid_level": "raid0", 00:09:15.914 "superblock": true, 00:09:15.914 "num_base_bdevs": 3, 00:09:15.914 "num_base_bdevs_discovered": 1, 00:09:15.914 "num_base_bdevs_operational": 3, 00:09:15.914 "base_bdevs_list": [ 00:09:15.914 { 00:09:15.914 "name": "BaseBdev1", 00:09:15.914 "uuid": "7de707e4-2d3b-4470-ac11-7aed5b02626d", 00:09:15.914 "is_configured": true, 00:09:15.914 "data_offset": 2048, 00:09:15.914 "data_size": 63488 00:09:15.914 }, 00:09:15.914 { 00:09:15.914 "name": null, 00:09:15.914 "uuid": "2f05522f-5374-4f20-b880-be71ce55e3d6", 00:09:15.914 "is_configured": false, 00:09:15.914 "data_offset": 0, 00:09:15.914 "data_size": 63488 00:09:15.914 }, 00:09:15.914 { 00:09:15.914 "name": null, 00:09:15.914 "uuid": "66e7cca1-36c0-4eb6-ac49-711799b89b4d", 00:09:15.914 "is_configured": false, 00:09:15.914 "data_offset": 0, 00:09:15.914 "data_size": 63488 00:09:15.914 } 00:09:15.914 ] 00:09:15.914 }' 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.914 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.481 [2024-11-26 18:58:42.949064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.481 18:58:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.481 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.481 "name": "Existed_Raid", 00:09:16.481 "uuid": "ff548583-c3e2-4595-9430-b12f6570a18f", 00:09:16.481 "strip_size_kb": 64, 00:09:16.481 "state": "configuring", 00:09:16.481 "raid_level": "raid0", 00:09:16.481 "superblock": true, 00:09:16.481 "num_base_bdevs": 3, 00:09:16.481 "num_base_bdevs_discovered": 2, 00:09:16.481 "num_base_bdevs_operational": 3, 00:09:16.481 "base_bdevs_list": [ 00:09:16.481 { 00:09:16.481 "name": "BaseBdev1", 00:09:16.481 "uuid": "7de707e4-2d3b-4470-ac11-7aed5b02626d", 00:09:16.481 "is_configured": true, 00:09:16.481 "data_offset": 2048, 00:09:16.481 "data_size": 63488 00:09:16.481 }, 00:09:16.481 { 00:09:16.481 "name": null, 00:09:16.481 "uuid": "2f05522f-5374-4f20-b880-be71ce55e3d6", 00:09:16.481 "is_configured": false, 00:09:16.481 "data_offset": 0, 00:09:16.481 "data_size": 63488 00:09:16.481 }, 00:09:16.481 { 00:09:16.481 "name": "BaseBdev3", 00:09:16.481 "uuid": "66e7cca1-36c0-4eb6-ac49-711799b89b4d", 00:09:16.481 "is_configured": true, 00:09:16.481 "data_offset": 2048, 00:09:16.481 "data_size": 63488 00:09:16.481 } 00:09:16.481 ] 00:09:16.481 }' 00:09:16.481 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.481 18:58:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.047 [2024-11-26 18:58:43.521272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.047 18:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.305 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.305 "name": "Existed_Raid", 00:09:17.305 "uuid": "ff548583-c3e2-4595-9430-b12f6570a18f", 00:09:17.305 "strip_size_kb": 64, 00:09:17.305 "state": "configuring", 00:09:17.305 "raid_level": "raid0", 00:09:17.305 "superblock": true, 00:09:17.305 "num_base_bdevs": 3, 00:09:17.305 "num_base_bdevs_discovered": 1, 00:09:17.305 "num_base_bdevs_operational": 3, 00:09:17.305 "base_bdevs_list": [ 00:09:17.305 { 00:09:17.305 "name": null, 00:09:17.305 "uuid": "7de707e4-2d3b-4470-ac11-7aed5b02626d", 00:09:17.305 "is_configured": false, 00:09:17.305 "data_offset": 0, 00:09:17.305 "data_size": 63488 00:09:17.305 }, 00:09:17.305 { 00:09:17.305 "name": null, 00:09:17.305 "uuid": "2f05522f-5374-4f20-b880-be71ce55e3d6", 00:09:17.305 "is_configured": false, 00:09:17.305 "data_offset": 0, 00:09:17.305 
"data_size": 63488 00:09:17.305 }, 00:09:17.305 { 00:09:17.305 "name": "BaseBdev3", 00:09:17.305 "uuid": "66e7cca1-36c0-4eb6-ac49-711799b89b4d", 00:09:17.305 "is_configured": true, 00:09:17.305 "data_offset": 2048, 00:09:17.305 "data_size": 63488 00:09:17.305 } 00:09:17.305 ] 00:09:17.305 }' 00:09:17.305 18:58:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.305 18:58:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.563 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.563 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:17.563 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.563 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.563 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.563 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:17.563 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:17.563 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.563 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.821 [2024-11-26 18:58:44.185239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.821 18:58:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.821 "name": "Existed_Raid", 00:09:17.821 "uuid": "ff548583-c3e2-4595-9430-b12f6570a18f", 00:09:17.821 "strip_size_kb": 64, 00:09:17.821 "state": "configuring", 00:09:17.821 "raid_level": "raid0", 00:09:17.821 "superblock": true, 00:09:17.821 "num_base_bdevs": 3, 00:09:17.821 
"num_base_bdevs_discovered": 2, 00:09:17.821 "num_base_bdevs_operational": 3, 00:09:17.821 "base_bdevs_list": [ 00:09:17.821 { 00:09:17.821 "name": null, 00:09:17.821 "uuid": "7de707e4-2d3b-4470-ac11-7aed5b02626d", 00:09:17.821 "is_configured": false, 00:09:17.821 "data_offset": 0, 00:09:17.821 "data_size": 63488 00:09:17.821 }, 00:09:17.821 { 00:09:17.821 "name": "BaseBdev2", 00:09:17.821 "uuid": "2f05522f-5374-4f20-b880-be71ce55e3d6", 00:09:17.821 "is_configured": true, 00:09:17.821 "data_offset": 2048, 00:09:17.821 "data_size": 63488 00:09:17.821 }, 00:09:17.821 { 00:09:17.821 "name": "BaseBdev3", 00:09:17.821 "uuid": "66e7cca1-36c0-4eb6-ac49-711799b89b4d", 00:09:17.821 "is_configured": true, 00:09:17.821 "data_offset": 2048, 00:09:17.821 "data_size": 63488 00:09:17.821 } 00:09:17.821 ] 00:09:17.821 }' 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.821 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.389 18:58:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7de707e4-2d3b-4470-ac11-7aed5b02626d 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.389 [2024-11-26 18:58:44.884489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:18.389 NewBaseBdev 00:09:18.389 [2024-11-26 18:58:44.884937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:18.389 [2024-11-26 18:58:44.884969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.389 [2024-11-26 18:58:44.885341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:18.389 [2024-11-26 18:58:44.885536] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:18.389 [2024-11-26 18:58:44.885553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:18.389 [2024-11-26 18:58:44.885729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.389 [ 00:09:18.389 { 00:09:18.389 "name": "NewBaseBdev", 00:09:18.389 "aliases": [ 00:09:18.389 "7de707e4-2d3b-4470-ac11-7aed5b02626d" 00:09:18.389 ], 00:09:18.389 "product_name": "Malloc disk", 00:09:18.389 "block_size": 512, 00:09:18.389 "num_blocks": 65536, 00:09:18.389 "uuid": "7de707e4-2d3b-4470-ac11-7aed5b02626d", 00:09:18.389 "assigned_rate_limits": { 00:09:18.389 "rw_ios_per_sec": 0, 00:09:18.389 "rw_mbytes_per_sec": 0, 00:09:18.389 "r_mbytes_per_sec": 0, 00:09:18.389 "w_mbytes_per_sec": 0 00:09:18.389 }, 00:09:18.389 "claimed": true, 00:09:18.389 "claim_type": "exclusive_write", 00:09:18.389 "zoned": false, 00:09:18.389 "supported_io_types": { 00:09:18.389 "read": true, 00:09:18.389 "write": true, 
00:09:18.389 "unmap": true, 00:09:18.389 "flush": true, 00:09:18.389 "reset": true, 00:09:18.389 "nvme_admin": false, 00:09:18.389 "nvme_io": false, 00:09:18.389 "nvme_io_md": false, 00:09:18.389 "write_zeroes": true, 00:09:18.389 "zcopy": true, 00:09:18.389 "get_zone_info": false, 00:09:18.389 "zone_management": false, 00:09:18.389 "zone_append": false, 00:09:18.389 "compare": false, 00:09:18.389 "compare_and_write": false, 00:09:18.389 "abort": true, 00:09:18.389 "seek_hole": false, 00:09:18.389 "seek_data": false, 00:09:18.389 "copy": true, 00:09:18.389 "nvme_iov_md": false 00:09:18.389 }, 00:09:18.389 "memory_domains": [ 00:09:18.389 { 00:09:18.389 "dma_device_id": "system", 00:09:18.389 "dma_device_type": 1 00:09:18.389 }, 00:09:18.389 { 00:09:18.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.389 "dma_device_type": 2 00:09:18.389 } 00:09:18.389 ], 00:09:18.389 "driver_specific": {} 00:09:18.389 } 00:09:18.389 ] 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.389 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.389 "name": "Existed_Raid", 00:09:18.389 "uuid": "ff548583-c3e2-4595-9430-b12f6570a18f", 00:09:18.389 "strip_size_kb": 64, 00:09:18.389 "state": "online", 00:09:18.389 "raid_level": "raid0", 00:09:18.389 "superblock": true, 00:09:18.389 "num_base_bdevs": 3, 00:09:18.389 "num_base_bdevs_discovered": 3, 00:09:18.389 "num_base_bdevs_operational": 3, 00:09:18.389 "base_bdevs_list": [ 00:09:18.389 { 00:09:18.389 "name": "NewBaseBdev", 00:09:18.389 "uuid": "7de707e4-2d3b-4470-ac11-7aed5b02626d", 00:09:18.389 "is_configured": true, 00:09:18.389 "data_offset": 2048, 00:09:18.389 "data_size": 63488 00:09:18.389 }, 00:09:18.389 { 00:09:18.389 "name": "BaseBdev2", 00:09:18.389 "uuid": "2f05522f-5374-4f20-b880-be71ce55e3d6", 00:09:18.389 "is_configured": true, 00:09:18.389 "data_offset": 2048, 00:09:18.389 "data_size": 63488 00:09:18.389 }, 00:09:18.389 { 00:09:18.389 "name": "BaseBdev3", 00:09:18.389 "uuid": 
"66e7cca1-36c0-4eb6-ac49-711799b89b4d", 00:09:18.389 "is_configured": true, 00:09:18.389 "data_offset": 2048, 00:09:18.389 "data_size": 63488 00:09:18.390 } 00:09:18.390 ] 00:09:18.390 }' 00:09:18.390 18:58:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.390 18:58:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.956 [2024-11-26 18:58:45.437101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.956 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.956 "name": "Existed_Raid", 00:09:18.956 "aliases": [ 00:09:18.956 "ff548583-c3e2-4595-9430-b12f6570a18f" 
00:09:18.956 ], 00:09:18.956 "product_name": "Raid Volume", 00:09:18.956 "block_size": 512, 00:09:18.956 "num_blocks": 190464, 00:09:18.956 "uuid": "ff548583-c3e2-4595-9430-b12f6570a18f", 00:09:18.956 "assigned_rate_limits": { 00:09:18.956 "rw_ios_per_sec": 0, 00:09:18.956 "rw_mbytes_per_sec": 0, 00:09:18.956 "r_mbytes_per_sec": 0, 00:09:18.956 "w_mbytes_per_sec": 0 00:09:18.956 }, 00:09:18.956 "claimed": false, 00:09:18.956 "zoned": false, 00:09:18.956 "supported_io_types": { 00:09:18.956 "read": true, 00:09:18.956 "write": true, 00:09:18.956 "unmap": true, 00:09:18.956 "flush": true, 00:09:18.956 "reset": true, 00:09:18.956 "nvme_admin": false, 00:09:18.956 "nvme_io": false, 00:09:18.956 "nvme_io_md": false, 00:09:18.956 "write_zeroes": true, 00:09:18.957 "zcopy": false, 00:09:18.957 "get_zone_info": false, 00:09:18.957 "zone_management": false, 00:09:18.957 "zone_append": false, 00:09:18.957 "compare": false, 00:09:18.957 "compare_and_write": false, 00:09:18.957 "abort": false, 00:09:18.957 "seek_hole": false, 00:09:18.957 "seek_data": false, 00:09:18.957 "copy": false, 00:09:18.957 "nvme_iov_md": false 00:09:18.957 }, 00:09:18.957 "memory_domains": [ 00:09:18.957 { 00:09:18.957 "dma_device_id": "system", 00:09:18.957 "dma_device_type": 1 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.957 "dma_device_type": 2 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "dma_device_id": "system", 00:09:18.957 "dma_device_type": 1 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.957 "dma_device_type": 2 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "dma_device_id": "system", 00:09:18.957 "dma_device_type": 1 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.957 "dma_device_type": 2 00:09:18.957 } 00:09:18.957 ], 00:09:18.957 "driver_specific": { 00:09:18.957 "raid": { 00:09:18.957 "uuid": "ff548583-c3e2-4595-9430-b12f6570a18f", 00:09:18.957 
"strip_size_kb": 64, 00:09:18.957 "state": "online", 00:09:18.957 "raid_level": "raid0", 00:09:18.957 "superblock": true, 00:09:18.957 "num_base_bdevs": 3, 00:09:18.957 "num_base_bdevs_discovered": 3, 00:09:18.957 "num_base_bdevs_operational": 3, 00:09:18.957 "base_bdevs_list": [ 00:09:18.957 { 00:09:18.957 "name": "NewBaseBdev", 00:09:18.957 "uuid": "7de707e4-2d3b-4470-ac11-7aed5b02626d", 00:09:18.957 "is_configured": true, 00:09:18.957 "data_offset": 2048, 00:09:18.957 "data_size": 63488 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "name": "BaseBdev2", 00:09:18.957 "uuid": "2f05522f-5374-4f20-b880-be71ce55e3d6", 00:09:18.957 "is_configured": true, 00:09:18.957 "data_offset": 2048, 00:09:18.957 "data_size": 63488 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "name": "BaseBdev3", 00:09:18.957 "uuid": "66e7cca1-36c0-4eb6-ac49-711799b89b4d", 00:09:18.957 "is_configured": true, 00:09:18.957 "data_offset": 2048, 00:09:18.957 "data_size": 63488 00:09:18.957 } 00:09:18.957 ] 00:09:18.957 } 00:09:18.957 } 00:09:18.957 }' 00:09:18.957 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.957 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:18.957 BaseBdev2 00:09:18.957 BaseBdev3' 00:09:18.957 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.957 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.957 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.216 [2024-11-26 18:58:45.712739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.216 [2024-11-26 18:58:45.712891] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.216 [2024-11-26 18:58:45.713166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.216 [2024-11-26 18:58:45.713363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.216 [2024-11-26 18:58:45.713422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64783 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64783 ']' 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 64783 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64783 00:09:19.216 killing process with pid 64783 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64783' 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64783 00:09:19.216 [2024-11-26 18:58:45.750345] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.216 18:58:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64783 00:09:19.475 [2024-11-26 18:58:46.042838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.849 ************************************ 00:09:20.849 END TEST raid_state_function_test_sb 00:09:20.849 ************************************ 00:09:20.849 18:58:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:20.849 00:09:20.849 real 0m11.967s 00:09:20.849 user 0m19.618s 00:09:20.849 sys 0m1.700s 00:09:20.849 18:58:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.849 18:58:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.849 18:58:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:20.849 18:58:47 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:20.849 18:58:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.849 18:58:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.849 ************************************ 00:09:20.849 START TEST raid_superblock_test 00:09:20.849 ************************************ 00:09:20.849 18:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:20.849 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:20.849 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:20.849 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:20.849 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:20.849 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:20.849 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:20.849 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:20.849 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:20.849 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:20.849 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:20.850 18:58:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65414 00:09:20.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65414 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65414 ']' 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.850 18:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.850 [2024-11-26 18:58:47.388445] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:09:20.850 [2024-11-26 18:58:47.388640] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65414 ] 00:09:21.108 [2024-11-26 18:58:47.582438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.366 [2024-11-26 18:58:47.751776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.366 [2024-11-26 18:58:47.985156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.366 [2024-11-26 18:58:47.985231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:21.933 
18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.933 malloc1 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.933 [2024-11-26 18:58:48.440332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:21.933 [2024-11-26 18:58:48.440543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.933 [2024-11-26 18:58:48.440624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:21.933 [2024-11-26 18:58:48.440773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.933 [2024-11-26 18:58:48.443718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.933 [2024-11-26 18:58:48.443884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:21.933 pt1 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.933 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.934 malloc2 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.934 [2024-11-26 18:58:48.499126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:21.934 [2024-11-26 18:58:48.499201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.934 [2024-11-26 18:58:48.499242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:21.934 [2024-11-26 18:58:48.499257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.934 [2024-11-26 18:58:48.502229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.934 [2024-11-26 18:58:48.502276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:21.934 
pt2 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.934 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 malloc3 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 [2024-11-26 18:58:48.570893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:22.193 [2024-11-26 18:58:48.570966] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.193 [2024-11-26 18:58:48.571001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:22.193 [2024-11-26 18:58:48.571016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.193 [2024-11-26 18:58:48.573917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.193 pt3 00:09:22.193 [2024-11-26 18:58:48.574093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 [2024-11-26 18:58:48.579030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:22.193 [2024-11-26 18:58:48.581788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:22.193 [2024-11-26 18:58:48.582020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:22.193 [2024-11-26 18:58:48.582300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:22.193 [2024-11-26 18:58:48.582433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:22.193 [2024-11-26 18:58:48.582798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:22.193 [2024-11-26 18:58:48.583140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:22.193 [2024-11-26 18:58:48.583260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:22.193 [2024-11-26 18:58:48.583644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.193 18:58:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.193 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.194 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.194 "name": "raid_bdev1", 00:09:22.194 "uuid": "e4f59d9f-4fcd-4d6d-a0fc-6a1799746128", 00:09:22.194 "strip_size_kb": 64, 00:09:22.194 "state": "online", 00:09:22.194 "raid_level": "raid0", 00:09:22.194 "superblock": true, 00:09:22.194 "num_base_bdevs": 3, 00:09:22.194 "num_base_bdevs_discovered": 3, 00:09:22.194 "num_base_bdevs_operational": 3, 00:09:22.194 "base_bdevs_list": [ 00:09:22.194 { 00:09:22.194 "name": "pt1", 00:09:22.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.194 "is_configured": true, 00:09:22.194 "data_offset": 2048, 00:09:22.194 "data_size": 63488 00:09:22.194 }, 00:09:22.194 { 00:09:22.194 "name": "pt2", 00:09:22.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.194 "is_configured": true, 00:09:22.194 "data_offset": 2048, 00:09:22.194 "data_size": 63488 00:09:22.194 }, 00:09:22.194 { 00:09:22.194 "name": "pt3", 00:09:22.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.194 "is_configured": true, 00:09:22.194 "data_offset": 2048, 00:09:22.194 "data_size": 63488 00:09:22.194 } 00:09:22.194 ] 00:09:22.194 }' 00:09:22.194 18:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.194 18:58:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.451 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:22.451 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:22.451 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:22.451 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:22.451 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.451 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.451 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:22.451 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.451 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.451 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.451 [2024-11-26 18:58:49.052130] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.710 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.710 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.710 "name": "raid_bdev1", 00:09:22.710 "aliases": [ 00:09:22.710 "e4f59d9f-4fcd-4d6d-a0fc-6a1799746128" 00:09:22.710 ], 00:09:22.710 "product_name": "Raid Volume", 00:09:22.710 "block_size": 512, 00:09:22.710 "num_blocks": 190464, 00:09:22.710 "uuid": "e4f59d9f-4fcd-4d6d-a0fc-6a1799746128", 00:09:22.710 "assigned_rate_limits": { 00:09:22.710 "rw_ios_per_sec": 0, 00:09:22.710 "rw_mbytes_per_sec": 0, 00:09:22.710 "r_mbytes_per_sec": 0, 00:09:22.710 "w_mbytes_per_sec": 0 00:09:22.710 }, 00:09:22.710 "claimed": false, 00:09:22.710 "zoned": false, 00:09:22.710 "supported_io_types": { 00:09:22.710 "read": true, 00:09:22.710 "write": true, 00:09:22.710 "unmap": true, 00:09:22.710 "flush": true, 00:09:22.710 "reset": true, 00:09:22.710 "nvme_admin": false, 00:09:22.710 "nvme_io": false, 00:09:22.710 "nvme_io_md": false, 00:09:22.710 "write_zeroes": true, 00:09:22.710 "zcopy": false, 00:09:22.710 "get_zone_info": false, 00:09:22.710 "zone_management": false, 00:09:22.710 "zone_append": false, 00:09:22.710 "compare": 
false, 00:09:22.710 "compare_and_write": false, 00:09:22.710 "abort": false, 00:09:22.710 "seek_hole": false, 00:09:22.710 "seek_data": false, 00:09:22.710 "copy": false, 00:09:22.710 "nvme_iov_md": false 00:09:22.710 }, 00:09:22.710 "memory_domains": [ 00:09:22.710 { 00:09:22.710 "dma_device_id": "system", 00:09:22.710 "dma_device_type": 1 00:09:22.711 }, 00:09:22.711 { 00:09:22.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.711 "dma_device_type": 2 00:09:22.711 }, 00:09:22.711 { 00:09:22.711 "dma_device_id": "system", 00:09:22.711 "dma_device_type": 1 00:09:22.711 }, 00:09:22.711 { 00:09:22.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.711 "dma_device_type": 2 00:09:22.711 }, 00:09:22.711 { 00:09:22.711 "dma_device_id": "system", 00:09:22.711 "dma_device_type": 1 00:09:22.711 }, 00:09:22.711 { 00:09:22.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.711 "dma_device_type": 2 00:09:22.711 } 00:09:22.711 ], 00:09:22.711 "driver_specific": { 00:09:22.711 "raid": { 00:09:22.711 "uuid": "e4f59d9f-4fcd-4d6d-a0fc-6a1799746128", 00:09:22.711 "strip_size_kb": 64, 00:09:22.711 "state": "online", 00:09:22.711 "raid_level": "raid0", 00:09:22.711 "superblock": true, 00:09:22.711 "num_base_bdevs": 3, 00:09:22.711 "num_base_bdevs_discovered": 3, 00:09:22.711 "num_base_bdevs_operational": 3, 00:09:22.711 "base_bdevs_list": [ 00:09:22.711 { 00:09:22.711 "name": "pt1", 00:09:22.711 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.711 "is_configured": true, 00:09:22.711 "data_offset": 2048, 00:09:22.711 "data_size": 63488 00:09:22.711 }, 00:09:22.711 { 00:09:22.711 "name": "pt2", 00:09:22.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.711 "is_configured": true, 00:09:22.711 "data_offset": 2048, 00:09:22.711 "data_size": 63488 00:09:22.711 }, 00:09:22.711 { 00:09:22.711 "name": "pt3", 00:09:22.711 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.711 "is_configured": true, 00:09:22.711 "data_offset": 2048, 00:09:22.711 "data_size": 
63488 00:09:22.711 } 00:09:22.711 ] 00:09:22.711 } 00:09:22.711 } 00:09:22.711 }' 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:22.711 pt2 00:09:22.711 pt3' 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.711 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.969 [2024-11-26 18:58:49.360134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e4f59d9f-4fcd-4d6d-a0fc-6a1799746128 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e4f59d9f-4fcd-4d6d-a0fc-6a1799746128 ']' 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.969 [2024-11-26 18:58:49.411794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.969 [2024-11-26 18:58:49.411956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.969 [2024-11-26 18:58:49.412172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.969 [2024-11-26 18:58:49.412406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.969 [2024-11-26 18:58:49.412433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:22.969 18:58:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.969 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.969 [2024-11-26 18:58:49.543911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:22.969 [2024-11-26 18:58:49.546684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:22.969 [2024-11-26 18:58:49.546877] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:22.969 [2024-11-26 18:58:49.546969] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:22.969 [2024-11-26 18:58:49.547044] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:22.970 [2024-11-26 18:58:49.547079] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:22.970 [2024-11-26 18:58:49.547107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.970 [2024-11-26 18:58:49.547123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:22.970 request: 00:09:22.970 { 00:09:22.970 "name": "raid_bdev1", 00:09:22.970 "raid_level": "raid0", 00:09:22.970 "base_bdevs": [ 00:09:22.970 "malloc1", 00:09:22.970 "malloc2", 00:09:22.970 "malloc3" 00:09:22.970 ], 00:09:22.970 "strip_size_kb": 64, 00:09:22.970 "superblock": false, 00:09:22.970 "method": "bdev_raid_create", 00:09:22.970 "req_id": 1 00:09:22.970 } 00:09:22.970 Got JSON-RPC error response 00:09:22.970 response: 00:09:22.970 { 00:09:22.970 "code": -17, 00:09:22.970 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:22.970 } 00:09:22.970 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:22.970 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:22.970 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:22.970 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:22.970 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:22.970 18:58:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.970 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:22.970 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.970 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.970 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.228 [2024-11-26 18:58:49.603857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:23.228 [2024-11-26 18:58:49.604045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.228 [2024-11-26 18:58:49.604119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:23.228 [2024-11-26 18:58:49.604337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.228 [2024-11-26 18:58:49.607400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.228 [2024-11-26 18:58:49.607553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:23.228 [2024-11-26 18:58:49.607750] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:23.228 [2024-11-26 18:58:49.607936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:23.228 pt1 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.228 "name": "raid_bdev1", 00:09:23.228 "uuid": "e4f59d9f-4fcd-4d6d-a0fc-6a1799746128", 00:09:23.228 
"strip_size_kb": 64, 00:09:23.228 "state": "configuring", 00:09:23.228 "raid_level": "raid0", 00:09:23.228 "superblock": true, 00:09:23.228 "num_base_bdevs": 3, 00:09:23.228 "num_base_bdevs_discovered": 1, 00:09:23.228 "num_base_bdevs_operational": 3, 00:09:23.228 "base_bdevs_list": [ 00:09:23.228 { 00:09:23.228 "name": "pt1", 00:09:23.228 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.228 "is_configured": true, 00:09:23.228 "data_offset": 2048, 00:09:23.228 "data_size": 63488 00:09:23.228 }, 00:09:23.228 { 00:09:23.228 "name": null, 00:09:23.228 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.228 "is_configured": false, 00:09:23.228 "data_offset": 2048, 00:09:23.228 "data_size": 63488 00:09:23.228 }, 00:09:23.228 { 00:09:23.228 "name": null, 00:09:23.228 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.228 "is_configured": false, 00:09:23.228 "data_offset": 2048, 00:09:23.228 "data_size": 63488 00:09:23.228 } 00:09:23.228 ] 00:09:23.228 }' 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.228 18:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.794 [2024-11-26 18:58:50.124564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:23.794 [2024-11-26 18:58:50.124664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.794 [2024-11-26 18:58:50.124708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:23.794 [2024-11-26 18:58:50.124724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.794 [2024-11-26 18:58:50.125380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.794 [2024-11-26 18:58:50.125422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:23.794 [2024-11-26 18:58:50.125542] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:23.794 [2024-11-26 18:58:50.125585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:23.794 pt2 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.794 [2024-11-26 18:58:50.132512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.794 18:58:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.794 "name": "raid_bdev1", 00:09:23.794 "uuid": "e4f59d9f-4fcd-4d6d-a0fc-6a1799746128", 00:09:23.794 "strip_size_kb": 64, 00:09:23.794 "state": "configuring", 00:09:23.794 "raid_level": "raid0", 00:09:23.794 "superblock": true, 00:09:23.794 "num_base_bdevs": 3, 00:09:23.794 "num_base_bdevs_discovered": 1, 00:09:23.794 "num_base_bdevs_operational": 3, 00:09:23.794 "base_bdevs_list": [ 00:09:23.794 { 00:09:23.794 "name": "pt1", 00:09:23.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.794 "is_configured": true, 00:09:23.794 "data_offset": 2048, 00:09:23.794 "data_size": 63488 00:09:23.794 }, 00:09:23.794 { 00:09:23.794 "name": null, 00:09:23.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.794 "is_configured": false, 00:09:23.794 "data_offset": 0, 00:09:23.794 "data_size": 63488 00:09:23.794 }, 00:09:23.794 { 00:09:23.794 "name": null, 00:09:23.794 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.794 
"is_configured": false, 00:09:23.794 "data_offset": 2048, 00:09:23.794 "data_size": 63488 00:09:23.794 } 00:09:23.794 ] 00:09:23.794 }' 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.794 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.052 [2024-11-26 18:58:50.648701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.052 [2024-11-26 18:58:50.648797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.052 [2024-11-26 18:58:50.648841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:24.052 [2024-11-26 18:58:50.648862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.052 [2024-11-26 18:58:50.649532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.052 [2024-11-26 18:58:50.649575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:24.052 [2024-11-26 18:58:50.649697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:24.052 [2024-11-26 18:58:50.649736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.052 pt2 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.052 [2024-11-26 18:58:50.660664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:24.052 [2024-11-26 18:58:50.660723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.052 [2024-11-26 18:58:50.660745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:24.052 [2024-11-26 18:58:50.660761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.052 [2024-11-26 18:58:50.661234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.052 [2024-11-26 18:58:50.661299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:24.052 [2024-11-26 18:58:50.661378] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:24.052 [2024-11-26 18:58:50.661411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:24.052 [2024-11-26 18:58:50.661571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.052 [2024-11-26 18:58:50.661601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:24.052 [2024-11-26 18:58:50.661932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:24.052 [2024-11-26 18:58:50.662135] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.052 [2024-11-26 18:58:50.662159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:24.052 [2024-11-26 18:58:50.662347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.052 pt3 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.052 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.053 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.053 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.053 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:24.053 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.053 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.310 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.310 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.310 "name": "raid_bdev1", 00:09:24.310 "uuid": "e4f59d9f-4fcd-4d6d-a0fc-6a1799746128", 00:09:24.310 "strip_size_kb": 64, 00:09:24.310 "state": "online", 00:09:24.310 "raid_level": "raid0", 00:09:24.310 "superblock": true, 00:09:24.310 "num_base_bdevs": 3, 00:09:24.310 "num_base_bdevs_discovered": 3, 00:09:24.310 "num_base_bdevs_operational": 3, 00:09:24.310 "base_bdevs_list": [ 00:09:24.310 { 00:09:24.310 "name": "pt1", 00:09:24.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.310 "is_configured": true, 00:09:24.310 "data_offset": 2048, 00:09:24.310 "data_size": 63488 00:09:24.310 }, 00:09:24.310 { 00:09:24.310 "name": "pt2", 00:09:24.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.310 "is_configured": true, 00:09:24.310 "data_offset": 2048, 00:09:24.310 "data_size": 63488 00:09:24.310 }, 00:09:24.310 { 00:09:24.310 "name": "pt3", 00:09:24.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.310 "is_configured": true, 00:09:24.310 "data_offset": 2048, 00:09:24.310 "data_size": 63488 00:09:24.310 } 00:09:24.310 ] 00:09:24.310 }' 00:09:24.310 18:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.310 18:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.568 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:24.568 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:24.568 18:58:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.568 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.568 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.568 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.568 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:24.568 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.568 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.568 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.568 [2024-11-26 18:58:51.173351] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.827 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.827 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.827 "name": "raid_bdev1", 00:09:24.827 "aliases": [ 00:09:24.827 "e4f59d9f-4fcd-4d6d-a0fc-6a1799746128" 00:09:24.827 ], 00:09:24.827 "product_name": "Raid Volume", 00:09:24.827 "block_size": 512, 00:09:24.827 "num_blocks": 190464, 00:09:24.827 "uuid": "e4f59d9f-4fcd-4d6d-a0fc-6a1799746128", 00:09:24.827 "assigned_rate_limits": { 00:09:24.827 "rw_ios_per_sec": 0, 00:09:24.827 "rw_mbytes_per_sec": 0, 00:09:24.827 "r_mbytes_per_sec": 0, 00:09:24.827 "w_mbytes_per_sec": 0 00:09:24.827 }, 00:09:24.827 "claimed": false, 00:09:24.827 "zoned": false, 00:09:24.827 "supported_io_types": { 00:09:24.827 "read": true, 00:09:24.827 "write": true, 00:09:24.827 "unmap": true, 00:09:24.827 "flush": true, 00:09:24.827 "reset": true, 00:09:24.827 "nvme_admin": false, 00:09:24.827 "nvme_io": false, 00:09:24.827 "nvme_io_md": false, 00:09:24.827 
"write_zeroes": true, 00:09:24.827 "zcopy": false, 00:09:24.827 "get_zone_info": false, 00:09:24.827 "zone_management": false, 00:09:24.827 "zone_append": false, 00:09:24.827 "compare": false, 00:09:24.827 "compare_and_write": false, 00:09:24.827 "abort": false, 00:09:24.827 "seek_hole": false, 00:09:24.827 "seek_data": false, 00:09:24.827 "copy": false, 00:09:24.827 "nvme_iov_md": false 00:09:24.827 }, 00:09:24.827 "memory_domains": [ 00:09:24.827 { 00:09:24.827 "dma_device_id": "system", 00:09:24.827 "dma_device_type": 1 00:09:24.827 }, 00:09:24.827 { 00:09:24.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.827 "dma_device_type": 2 00:09:24.827 }, 00:09:24.827 { 00:09:24.827 "dma_device_id": "system", 00:09:24.827 "dma_device_type": 1 00:09:24.827 }, 00:09:24.827 { 00:09:24.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.828 "dma_device_type": 2 00:09:24.828 }, 00:09:24.828 { 00:09:24.828 "dma_device_id": "system", 00:09:24.828 "dma_device_type": 1 00:09:24.828 }, 00:09:24.828 { 00:09:24.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.828 "dma_device_type": 2 00:09:24.828 } 00:09:24.828 ], 00:09:24.828 "driver_specific": { 00:09:24.828 "raid": { 00:09:24.828 "uuid": "e4f59d9f-4fcd-4d6d-a0fc-6a1799746128", 00:09:24.828 "strip_size_kb": 64, 00:09:24.828 "state": "online", 00:09:24.828 "raid_level": "raid0", 00:09:24.828 "superblock": true, 00:09:24.828 "num_base_bdevs": 3, 00:09:24.828 "num_base_bdevs_discovered": 3, 00:09:24.828 "num_base_bdevs_operational": 3, 00:09:24.828 "base_bdevs_list": [ 00:09:24.828 { 00:09:24.828 "name": "pt1", 00:09:24.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.828 "is_configured": true, 00:09:24.828 "data_offset": 2048, 00:09:24.828 "data_size": 63488 00:09:24.828 }, 00:09:24.828 { 00:09:24.828 "name": "pt2", 00:09:24.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.828 "is_configured": true, 00:09:24.828 "data_offset": 2048, 00:09:24.828 "data_size": 63488 00:09:24.828 }, 00:09:24.828 
{ 00:09:24.828 "name": "pt3", 00:09:24.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.828 "is_configured": true, 00:09:24.828 "data_offset": 2048, 00:09:24.828 "data_size": 63488 00:09:24.828 } 00:09:24.828 ] 00:09:24.828 } 00:09:24.828 } 00:09:24.828 }' 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:24.828 pt2 00:09:24.828 pt3' 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:24.828 18:58:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.828 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.089 
[2024-11-26 18:58:51.521305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e4f59d9f-4fcd-4d6d-a0fc-6a1799746128 '!=' e4f59d9f-4fcd-4d6d-a0fc-6a1799746128 ']' 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65414 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65414 ']' 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65414 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65414 00:09:25.089 killing process with pid 65414 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65414' 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65414 00:09:25.089 18:58:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65414 00:09:25.089 [2024-11-26 18:58:51.601451] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.089 [2024-11-26 18:58:51.601591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.089 [2024-11-26 18:58:51.601678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.089 [2024-11-26 18:58:51.601710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:25.347 [2024-11-26 18:58:51.912116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.721 ************************************ 00:09:26.721 END TEST raid_superblock_test 00:09:26.721 ************************************ 00:09:26.721 18:58:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:26.721 00:09:26.721 real 0m5.838s 00:09:26.721 user 0m8.635s 00:09:26.721 sys 0m0.863s 00:09:26.721 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.721 18:58:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.721 18:58:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:26.721 18:58:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:26.721 18:58:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.721 18:58:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.721 ************************************ 00:09:26.721 START TEST raid_read_error_test 00:09:26.721 ************************************ 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:26.721 18:58:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WDqeBAqIJs 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65673 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65673 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65673 ']' 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.721 18:58:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.722 [2024-11-26 18:58:53.296553] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:09:26.722 [2024-11-26 18:58:53.297623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65673 ] 00:09:26.981 [2024-11-26 18:58:53.494757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.239 [2024-11-26 18:58:53.671186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.497 [2024-11-26 18:58:53.896166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.498 [2024-11-26 18:58:53.896282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.064 BaseBdev1_malloc 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.064 true 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.064 [2024-11-26 18:58:54.465324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:28.064 [2024-11-26 18:58:54.465421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.064 [2024-11-26 18:58:54.465477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:28.064 [2024-11-26 18:58:54.465495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.064 [2024-11-26 18:58:54.468771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.064 [2024-11-26 18:58:54.468835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:28.064 BaseBdev1 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.064 BaseBdev2_malloc 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.064 true 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.064 [2024-11-26 18:58:54.531879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:28.064 [2024-11-26 18:58:54.532000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.064 [2024-11-26 18:58:54.532039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:28.064 [2024-11-26 18:58:54.532062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.064 [2024-11-26 18:58:54.535153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.064 [2024-11-26 18:58:54.535233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:28.064 BaseBdev2 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.064 BaseBdev3_malloc 00:09:28.064 18:58:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.064 true 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.064 [2024-11-26 18:58:54.612960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:28.064 [2024-11-26 18:58:54.613061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.064 [2024-11-26 18:58:54.613088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:28.064 [2024-11-26 18:58:54.613104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.064 [2024-11-26 18:58:54.616229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.064 [2024-11-26 18:58:54.616318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:28.064 BaseBdev3 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.064 [2024-11-26 18:58:54.621215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.064 [2024-11-26 18:58:54.623894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.064 [2024-11-26 18:58:54.624032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.064 [2024-11-26 18:58:54.624313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:28.064 [2024-11-26 18:58:54.624335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:28.064 [2024-11-26 18:58:54.624644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:28.064 [2024-11-26 18:58:54.624898] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:28.064 [2024-11-26 18:58:54.624922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:28.064 [2024-11-26 18:58:54.625152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.064 18:58:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.064 "name": "raid_bdev1", 00:09:28.064 "uuid": "a5654db1-29b6-4e07-9ca9-d410d0b02411", 00:09:28.064 "strip_size_kb": 64, 00:09:28.064 "state": "online", 00:09:28.064 "raid_level": "raid0", 00:09:28.064 "superblock": true, 00:09:28.064 "num_base_bdevs": 3, 00:09:28.064 "num_base_bdevs_discovered": 3, 00:09:28.064 "num_base_bdevs_operational": 3, 00:09:28.064 "base_bdevs_list": [ 00:09:28.064 { 00:09:28.064 "name": "BaseBdev1", 00:09:28.064 "uuid": "21c80c4d-d819-5f6c-be87-0ae4611ea99c", 00:09:28.064 "is_configured": true, 00:09:28.064 "data_offset": 2048, 00:09:28.064 "data_size": 63488 00:09:28.064 }, 00:09:28.064 { 00:09:28.064 "name": "BaseBdev2", 00:09:28.064 "uuid": "f0f101f7-7cff-55ca-acce-55c8b7c1cdb1", 00:09:28.064 "is_configured": true, 00:09:28.064 "data_offset": 2048, 00:09:28.064 "data_size": 63488 
00:09:28.064 }, 00:09:28.064 { 00:09:28.064 "name": "BaseBdev3", 00:09:28.064 "uuid": "72e0aeff-6aa7-5bed-b886-e6c11c4f8c39", 00:09:28.064 "is_configured": true, 00:09:28.064 "data_offset": 2048, 00:09:28.064 "data_size": 63488 00:09:28.064 } 00:09:28.064 ] 00:09:28.064 }' 00:09:28.064 18:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.065 18:58:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.629 18:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:28.629 18:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:28.887 [2024-11-26 18:58:55.279105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.819 "name": "raid_bdev1", 00:09:29.819 "uuid": "a5654db1-29b6-4e07-9ca9-d410d0b02411", 00:09:29.819 "strip_size_kb": 64, 00:09:29.819 "state": "online", 00:09:29.819 "raid_level": "raid0", 00:09:29.819 "superblock": true, 00:09:29.819 "num_base_bdevs": 3, 00:09:29.819 "num_base_bdevs_discovered": 3, 00:09:29.819 "num_base_bdevs_operational": 3, 00:09:29.819 "base_bdevs_list": [ 00:09:29.819 { 00:09:29.819 "name": "BaseBdev1", 00:09:29.819 "uuid": "21c80c4d-d819-5f6c-be87-0ae4611ea99c", 00:09:29.819 "is_configured": true, 00:09:29.819 "data_offset": 2048, 00:09:29.819 "data_size": 63488 
00:09:29.819 }, 00:09:29.819 { 00:09:29.819 "name": "BaseBdev2", 00:09:29.819 "uuid": "f0f101f7-7cff-55ca-acce-55c8b7c1cdb1", 00:09:29.819 "is_configured": true, 00:09:29.819 "data_offset": 2048, 00:09:29.819 "data_size": 63488 00:09:29.819 }, 00:09:29.819 { 00:09:29.819 "name": "BaseBdev3", 00:09:29.819 "uuid": "72e0aeff-6aa7-5bed-b886-e6c11c4f8c39", 00:09:29.819 "is_configured": true, 00:09:29.819 "data_offset": 2048, 00:09:29.819 "data_size": 63488 00:09:29.819 } 00:09:29.819 ] 00:09:29.819 }' 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.819 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.089 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.089 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.089 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.089 [2024-11-26 18:58:56.693755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.089 [2024-11-26 18:58:56.693797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.089 [2024-11-26 18:58:56.697307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.089 [2024-11-26 18:58:56.697366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.089 [2024-11-26 18:58:56.697431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.089 [2024-11-26 18:58:56.697446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:30.089 { 00:09:30.089 "results": [ 00:09:30.089 { 00:09:30.089 "job": "raid_bdev1", 00:09:30.089 "core_mask": "0x1", 00:09:30.089 "workload": "randrw", 00:09:30.089 "percentage": 50, 
00:09:30.089 "status": "finished", 00:09:30.089 "queue_depth": 1, 00:09:30.089 "io_size": 131072, 00:09:30.089 "runtime": 1.412024, 00:09:30.089 "iops": 9664.84988923701, 00:09:30.089 "mibps": 1208.1062361546262, 00:09:30.089 "io_failed": 1, 00:09:30.089 "io_timeout": 0, 00:09:30.089 "avg_latency_us": 145.2456698284131, 00:09:30.089 "min_latency_us": 28.974545454545453, 00:09:30.089 "max_latency_us": 1846.9236363636364 00:09:30.089 } 00:09:30.089 ], 00:09:30.089 "core_count": 1 00:09:30.089 } 00:09:30.089 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.089 18:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65673 00:09:30.089 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65673 ']' 00:09:30.089 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65673 00:09:30.089 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:30.391 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.391 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65673 00:09:30.391 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.391 killing process with pid 65673 00:09:30.391 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.391 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65673' 00:09:30.391 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65673 00:09:30.391 [2024-11-26 18:58:56.735784] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.391 18:58:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65673 00:09:30.391 [2024-11-26 
18:58:56.974191] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.761 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WDqeBAqIJs 00:09:31.761 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:31.761 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:31.761 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:31.761 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:31.761 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.761 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:31.761 18:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:31.761 00:09:31.761 real 0m5.073s 00:09:31.761 user 0m6.251s 00:09:31.761 sys 0m0.693s 00:09:31.761 18:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.761 ************************************ 00:09:31.761 END TEST raid_read_error_test 00:09:31.761 ************************************ 00:09:31.761 18:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.761 18:58:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:31.761 18:58:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:31.761 18:58:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.761 18:58:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.761 ************************************ 00:09:31.761 START TEST raid_write_error_test 00:09:31.761 ************************************ 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:31.761 18:58:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:31.761 18:58:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LBaLt9s2Pq 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65824 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65824 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65824 ']' 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.761 18:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.762 18:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:31.762 18:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.762 18:58:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.019 [2024-11-26 18:58:58.404847] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:09:32.019 [2024-11-26 18:58:58.405009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65824 ] 00:09:32.019 [2024-11-26 18:58:58.580951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.277 [2024-11-26 18:58:58.735345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.536 [2024-11-26 18:58:58.973585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.536 [2024-11-26 18:58:58.973635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.792 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.792 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:32.792 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:32.792 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:32.792 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.792 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 BaseBdev1_malloc 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 true 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 [2024-11-26 18:58:59.442178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:33.051 [2024-11-26 18:58:59.442266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.051 [2024-11-26 18:58:59.442311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:33.051 [2024-11-26 18:58:59.442330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.051 [2024-11-26 18:58:59.445248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.051 [2024-11-26 18:58:59.445308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:33.051 BaseBdev1 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.051 BaseBdev2_malloc 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 true 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 [2024-11-26 18:58:59.507840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:33.051 [2024-11-26 18:58:59.507915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.051 [2024-11-26 18:58:59.507941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:33.051 [2024-11-26 18:58:59.507957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.051 [2024-11-26 18:58:59.510895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.051 [2024-11-26 18:58:59.510989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:33.051 BaseBdev2 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.051 18:58:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 BaseBdev3_malloc 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 true 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 [2024-11-26 18:58:59.581737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:33.051 [2024-11-26 18:58:59.581811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.051 [2024-11-26 18:58:59.581840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:33.051 [2024-11-26 18:58:59.581857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.051 [2024-11-26 18:58:59.584887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.051 [2024-11-26 18:58:59.584965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:33.051 BaseBdev3 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 [2024-11-26 18:58:59.589846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.051 [2024-11-26 18:58:59.592374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.051 [2024-11-26 18:58:59.592485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.051 [2024-11-26 18:58:59.592752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:33.051 [2024-11-26 18:58:59.592773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:33.051 [2024-11-26 18:58:59.593096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:33.051 [2024-11-26 18:58:59.593360] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:33.051 [2024-11-26 18:58:59.593387] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:33.051 [2024-11-26 18:58:59.593573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.051 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.052 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.052 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.052 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.052 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.052 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.052 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.052 "name": "raid_bdev1", 00:09:33.052 "uuid": "48e729bf-cb23-40bb-9ad9-e6780b41c0f1", 00:09:33.052 "strip_size_kb": 64, 00:09:33.052 "state": "online", 00:09:33.052 "raid_level": "raid0", 00:09:33.052 "superblock": true, 00:09:33.052 "num_base_bdevs": 3, 00:09:33.052 "num_base_bdevs_discovered": 3, 00:09:33.052 "num_base_bdevs_operational": 3, 00:09:33.052 "base_bdevs_list": [ 00:09:33.052 { 00:09:33.052 "name": "BaseBdev1", 
00:09:33.052 "uuid": "287fc5dd-d564-544f-8262-f9a4ca5a93b4", 00:09:33.052 "is_configured": true, 00:09:33.052 "data_offset": 2048, 00:09:33.052 "data_size": 63488 00:09:33.052 }, 00:09:33.052 { 00:09:33.052 "name": "BaseBdev2", 00:09:33.052 "uuid": "2a9d481c-9382-5ed0-91fa-a8ef8d5432df", 00:09:33.052 "is_configured": true, 00:09:33.052 "data_offset": 2048, 00:09:33.052 "data_size": 63488 00:09:33.052 }, 00:09:33.052 { 00:09:33.052 "name": "BaseBdev3", 00:09:33.052 "uuid": "2fd98162-895b-5654-9c91-a1d536ede598", 00:09:33.052 "is_configured": true, 00:09:33.052 "data_offset": 2048, 00:09:33.052 "data_size": 63488 00:09:33.052 } 00:09:33.052 ] 00:09:33.052 }' 00:09:33.052 18:58:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.052 18:58:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.618 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:33.618 18:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:33.876 [2024-11-26 18:59:00.251515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.811 "name": "raid_bdev1", 00:09:34.811 "uuid": "48e729bf-cb23-40bb-9ad9-e6780b41c0f1", 00:09:34.811 "strip_size_kb": 64, 00:09:34.811 "state": "online", 00:09:34.811 
"raid_level": "raid0", 00:09:34.811 "superblock": true, 00:09:34.811 "num_base_bdevs": 3, 00:09:34.811 "num_base_bdevs_discovered": 3, 00:09:34.811 "num_base_bdevs_operational": 3, 00:09:34.811 "base_bdevs_list": [ 00:09:34.811 { 00:09:34.811 "name": "BaseBdev1", 00:09:34.811 "uuid": "287fc5dd-d564-544f-8262-f9a4ca5a93b4", 00:09:34.811 "is_configured": true, 00:09:34.811 "data_offset": 2048, 00:09:34.811 "data_size": 63488 00:09:34.811 }, 00:09:34.811 { 00:09:34.811 "name": "BaseBdev2", 00:09:34.811 "uuid": "2a9d481c-9382-5ed0-91fa-a8ef8d5432df", 00:09:34.811 "is_configured": true, 00:09:34.811 "data_offset": 2048, 00:09:34.811 "data_size": 63488 00:09:34.811 }, 00:09:34.811 { 00:09:34.811 "name": "BaseBdev3", 00:09:34.811 "uuid": "2fd98162-895b-5654-9c91-a1d536ede598", 00:09:34.811 "is_configured": true, 00:09:34.811 "data_offset": 2048, 00:09:34.811 "data_size": 63488 00:09:34.811 } 00:09:34.811 ] 00:09:34.811 }' 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.811 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.378 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:35.378 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.378 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.378 [2024-11-26 18:59:01.698230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.378 [2024-11-26 18:59:01.698272] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.378 [2024-11-26 18:59:01.701963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.378 [2024-11-26 18:59:01.702043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.378 [2024-11-26 18:59:01.702107] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.378 [2024-11-26 18:59:01.702122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:35.378 { 00:09:35.378 "results": [ 00:09:35.378 { 00:09:35.378 "job": "raid_bdev1", 00:09:35.378 "core_mask": "0x1", 00:09:35.378 "workload": "randrw", 00:09:35.378 "percentage": 50, 00:09:35.378 "status": "finished", 00:09:35.378 "queue_depth": 1, 00:09:35.378 "io_size": 131072, 00:09:35.378 "runtime": 1.444401, 00:09:35.378 "iops": 9662.136761190279, 00:09:35.378 "mibps": 1207.7670951487848, 00:09:35.378 "io_failed": 1, 00:09:35.378 "io_timeout": 0, 00:09:35.378 "avg_latency_us": 145.55199176692048, 00:09:35.378 "min_latency_us": 39.79636363636364, 00:09:35.378 "max_latency_us": 1921.3963636363637 00:09:35.378 } 00:09:35.378 ], 00:09:35.378 "core_count": 1 00:09:35.378 } 00:09:35.378 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.378 18:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65824 00:09:35.379 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65824 ']' 00:09:35.379 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65824 00:09:35.379 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:35.379 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.379 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65824 00:09:35.379 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.379 killing process with pid 65824 00:09:35.379 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.379 18:59:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65824' 00:09:35.379 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65824 00:09:35.379 [2024-11-26 18:59:01.739076] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.379 18:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65824 00:09:35.379 [2024-11-26 18:59:01.971349] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.753 18:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:36.753 18:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LBaLt9s2Pq 00:09:36.753 18:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:36.753 18:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:09:36.753 18:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:36.753 18:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.753 18:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:36.753 18:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:09:36.753 00:09:36.753 real 0m4.926s 00:09:36.753 user 0m6.053s 00:09:36.753 sys 0m0.639s 00:09:36.753 18:59:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.753 18:59:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.753 ************************************ 00:09:36.753 END TEST raid_write_error_test 00:09:36.753 ************************************ 00:09:36.753 18:59:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:36.753 18:59:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:36.753 18:59:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:36.753 18:59:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.753 18:59:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.753 ************************************ 00:09:36.753 START TEST raid_state_function_test 00:09:36.753 ************************************ 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:36.753 18:59:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65968 00:09:36.753 Process raid pid: 65968 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65968' 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65968 00:09:36.753 18:59:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65968 ']' 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.753 18:59:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.011 [2024-11-26 18:59:03.391833] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:09:37.011 [2024-11-26 18:59:03.392536] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.011 [2024-11-26 18:59:03.580873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.269 [2024-11-26 18:59:03.736261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.527 [2024-11-26 18:59:03.973451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.527 [2024-11-26 18:59:03.973515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.094 [2024-11-26 18:59:04.445979] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.094 [2024-11-26 18:59:04.446055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.094 [2024-11-26 18:59:04.446072] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.094 [2024-11-26 18:59:04.446088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.094 [2024-11-26 18:59:04.446098] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.094 [2024-11-26 18:59:04.446112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.094 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.094 "name": "Existed_Raid", 00:09:38.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.094 "strip_size_kb": 64, 00:09:38.094 "state": "configuring", 00:09:38.094 "raid_level": "concat", 00:09:38.094 "superblock": false, 00:09:38.095 "num_base_bdevs": 3, 00:09:38.095 "num_base_bdevs_discovered": 0, 00:09:38.095 "num_base_bdevs_operational": 3, 00:09:38.095 "base_bdevs_list": [ 00:09:38.095 { 00:09:38.095 "name": "BaseBdev1", 00:09:38.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.095 "is_configured": false, 00:09:38.095 "data_offset": 0, 00:09:38.095 "data_size": 0 00:09:38.095 }, 00:09:38.095 { 00:09:38.095 "name": "BaseBdev2", 00:09:38.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.095 "is_configured": false, 00:09:38.095 "data_offset": 0, 00:09:38.095 "data_size": 0 00:09:38.095 }, 00:09:38.095 { 00:09:38.095 "name": "BaseBdev3", 00:09:38.095 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:38.095 "is_configured": false, 00:09:38.095 "data_offset": 0, 00:09:38.095 "data_size": 0 00:09:38.095 } 00:09:38.095 ] 00:09:38.095 }' 00:09:38.095 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.095 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.353 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.353 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.353 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.353 [2024-11-26 18:59:04.970096] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.353 [2024-11-26 18:59:04.970155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:38.611 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.611 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:38.611 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.611 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.611 [2024-11-26 18:59:04.978098] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.611 [2024-11-26 18:59:04.978164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.611 [2024-11-26 18:59:04.978178] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.611 [2024-11-26 18:59:04.978192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:38.611 [2024-11-26 18:59:04.978201] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.611 [2024-11-26 18:59:04.978214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.611 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.611 18:59:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.611 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.611 18:59:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.611 [2024-11-26 18:59:05.027108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.611 BaseBdev1 00:09:38.611 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.611 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:38.611 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:38.611 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.611 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:38.611 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.611 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.611 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.611 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.611 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:38.611 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.612 [ 00:09:38.612 { 00:09:38.612 "name": "BaseBdev1", 00:09:38.612 "aliases": [ 00:09:38.612 "76fd85a2-5a5a-4734-a8ff-8c7891a1c6f9" 00:09:38.612 ], 00:09:38.612 "product_name": "Malloc disk", 00:09:38.612 "block_size": 512, 00:09:38.612 "num_blocks": 65536, 00:09:38.612 "uuid": "76fd85a2-5a5a-4734-a8ff-8c7891a1c6f9", 00:09:38.612 "assigned_rate_limits": { 00:09:38.612 "rw_ios_per_sec": 0, 00:09:38.612 "rw_mbytes_per_sec": 0, 00:09:38.612 "r_mbytes_per_sec": 0, 00:09:38.612 "w_mbytes_per_sec": 0 00:09:38.612 }, 00:09:38.612 "claimed": true, 00:09:38.612 "claim_type": "exclusive_write", 00:09:38.612 "zoned": false, 00:09:38.612 "supported_io_types": { 00:09:38.612 "read": true, 00:09:38.612 "write": true, 00:09:38.612 "unmap": true, 00:09:38.612 "flush": true, 00:09:38.612 "reset": true, 00:09:38.612 "nvme_admin": false, 00:09:38.612 "nvme_io": false, 00:09:38.612 "nvme_io_md": false, 00:09:38.612 "write_zeroes": true, 00:09:38.612 "zcopy": true, 00:09:38.612 "get_zone_info": false, 00:09:38.612 "zone_management": false, 00:09:38.612 "zone_append": false, 00:09:38.612 "compare": false, 00:09:38.612 "compare_and_write": false, 00:09:38.612 "abort": true, 00:09:38.612 "seek_hole": false, 00:09:38.612 "seek_data": false, 00:09:38.612 "copy": true, 00:09:38.612 "nvme_iov_md": false 00:09:38.612 }, 00:09:38.612 "memory_domains": [ 00:09:38.612 { 00:09:38.612 "dma_device_id": "system", 00:09:38.612 "dma_device_type": 1 00:09:38.612 }, 00:09:38.612 { 00:09:38.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:38.612 "dma_device_type": 2 00:09:38.612 } 00:09:38.612 ], 00:09:38.612 "driver_specific": {} 00:09:38.612 } 00:09:38.612 ] 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.612 18:59:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.612 "name": "Existed_Raid", 00:09:38.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.612 "strip_size_kb": 64, 00:09:38.612 "state": "configuring", 00:09:38.612 "raid_level": "concat", 00:09:38.612 "superblock": false, 00:09:38.612 "num_base_bdevs": 3, 00:09:38.612 "num_base_bdevs_discovered": 1, 00:09:38.612 "num_base_bdevs_operational": 3, 00:09:38.612 "base_bdevs_list": [ 00:09:38.612 { 00:09:38.612 "name": "BaseBdev1", 00:09:38.612 "uuid": "76fd85a2-5a5a-4734-a8ff-8c7891a1c6f9", 00:09:38.612 "is_configured": true, 00:09:38.612 "data_offset": 0, 00:09:38.612 "data_size": 65536 00:09:38.612 }, 00:09:38.612 { 00:09:38.612 "name": "BaseBdev2", 00:09:38.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.612 "is_configured": false, 00:09:38.612 "data_offset": 0, 00:09:38.612 "data_size": 0 00:09:38.612 }, 00:09:38.612 { 00:09:38.612 "name": "BaseBdev3", 00:09:38.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.612 "is_configured": false, 00:09:38.612 "data_offset": 0, 00:09:38.612 "data_size": 0 00:09:38.612 } 00:09:38.612 ] 00:09:38.612 }' 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.612 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.179 [2024-11-26 18:59:05.587346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.179 [2024-11-26 18:59:05.587435] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.179 [2024-11-26 18:59:05.595407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.179 [2024-11-26 18:59:05.598192] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.179 [2024-11-26 18:59:05.598253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.179 [2024-11-26 18:59:05.598271] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.179 [2024-11-26 18:59:05.598299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.179 18:59:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.179 "name": "Existed_Raid", 00:09:39.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.179 "strip_size_kb": 64, 00:09:39.179 "state": "configuring", 00:09:39.179 "raid_level": "concat", 00:09:39.179 "superblock": false, 00:09:39.179 "num_base_bdevs": 3, 00:09:39.179 "num_base_bdevs_discovered": 1, 00:09:39.179 "num_base_bdevs_operational": 3, 00:09:39.179 "base_bdevs_list": [ 00:09:39.179 { 00:09:39.179 "name": "BaseBdev1", 00:09:39.179 "uuid": "76fd85a2-5a5a-4734-a8ff-8c7891a1c6f9", 00:09:39.179 "is_configured": true, 00:09:39.179 "data_offset": 
0, 00:09:39.179 "data_size": 65536 00:09:39.179 }, 00:09:39.179 { 00:09:39.179 "name": "BaseBdev2", 00:09:39.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.179 "is_configured": false, 00:09:39.179 "data_offset": 0, 00:09:39.179 "data_size": 0 00:09:39.179 }, 00:09:39.179 { 00:09:39.179 "name": "BaseBdev3", 00:09:39.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.179 "is_configured": false, 00:09:39.179 "data_offset": 0, 00:09:39.179 "data_size": 0 00:09:39.179 } 00:09:39.179 ] 00:09:39.179 }' 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.179 18:59:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.746 BaseBdev2 00:09:39.746 [2024-11-26 18:59:06.182564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.746 [ 00:09:39.746 { 00:09:39.746 "name": "BaseBdev2", 00:09:39.746 "aliases": [ 00:09:39.746 "b2a924be-ab7b-4ca0-bdb0-7a34420d37e4" 00:09:39.746 ], 00:09:39.746 "product_name": "Malloc disk", 00:09:39.746 "block_size": 512, 00:09:39.746 "num_blocks": 65536, 00:09:39.746 "uuid": "b2a924be-ab7b-4ca0-bdb0-7a34420d37e4", 00:09:39.746 "assigned_rate_limits": { 00:09:39.746 "rw_ios_per_sec": 0, 00:09:39.746 "rw_mbytes_per_sec": 0, 00:09:39.746 "r_mbytes_per_sec": 0, 00:09:39.746 "w_mbytes_per_sec": 0 00:09:39.746 }, 00:09:39.746 "claimed": true, 00:09:39.746 "claim_type": "exclusive_write", 00:09:39.746 "zoned": false, 00:09:39.746 "supported_io_types": { 00:09:39.746 "read": true, 00:09:39.746 "write": true, 00:09:39.746 "unmap": true, 00:09:39.746 "flush": true, 00:09:39.746 "reset": true, 00:09:39.746 "nvme_admin": false, 00:09:39.746 "nvme_io": false, 00:09:39.746 "nvme_io_md": false, 00:09:39.746 "write_zeroes": true, 00:09:39.746 "zcopy": true, 00:09:39.746 "get_zone_info": false, 00:09:39.746 "zone_management": false, 00:09:39.746 "zone_append": false, 00:09:39.746 "compare": false, 00:09:39.746 "compare_and_write": false, 00:09:39.746 "abort": true, 00:09:39.746 "seek_hole": 
false, 00:09:39.746 "seek_data": false, 00:09:39.746 "copy": true, 00:09:39.746 "nvme_iov_md": false 00:09:39.746 }, 00:09:39.746 "memory_domains": [ 00:09:39.746 { 00:09:39.746 "dma_device_id": "system", 00:09:39.746 "dma_device_type": 1 00:09:39.746 }, 00:09:39.746 { 00:09:39.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.746 "dma_device_type": 2 00:09:39.746 } 00:09:39.746 ], 00:09:39.746 "driver_specific": {} 00:09:39.746 } 00:09:39.746 ] 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.746 "name": "Existed_Raid", 00:09:39.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.746 "strip_size_kb": 64, 00:09:39.746 "state": "configuring", 00:09:39.746 "raid_level": "concat", 00:09:39.746 "superblock": false, 00:09:39.746 "num_base_bdevs": 3, 00:09:39.746 "num_base_bdevs_discovered": 2, 00:09:39.746 "num_base_bdevs_operational": 3, 00:09:39.746 "base_bdevs_list": [ 00:09:39.746 { 00:09:39.746 "name": "BaseBdev1", 00:09:39.746 "uuid": "76fd85a2-5a5a-4734-a8ff-8c7891a1c6f9", 00:09:39.746 "is_configured": true, 00:09:39.746 "data_offset": 0, 00:09:39.746 "data_size": 65536 00:09:39.746 }, 00:09:39.746 { 00:09:39.746 "name": "BaseBdev2", 00:09:39.746 "uuid": "b2a924be-ab7b-4ca0-bdb0-7a34420d37e4", 00:09:39.746 "is_configured": true, 00:09:39.746 "data_offset": 0, 00:09:39.746 "data_size": 65536 00:09:39.746 }, 00:09:39.746 { 00:09:39.746 "name": "BaseBdev3", 00:09:39.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.746 "is_configured": false, 00:09:39.746 "data_offset": 0, 00:09:39.746 "data_size": 0 00:09:39.746 } 00:09:39.746 ] 00:09:39.746 }' 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.746 18:59:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.312 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:40.312 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.312 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.312 [2024-11-26 18:59:06.828129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.312 BaseBdev3 00:09:40.312 [2024-11-26 18:59:06.828425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:40.312 [2024-11-26 18:59:06.828461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:40.312 [2024-11-26 18:59:06.828839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:40.312 [2024-11-26 18:59:06.829160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:40.312 [2024-11-26 18:59:06.829179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:40.312 [2024-11-26 18:59:06.829537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.312 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.312 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:40.312 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:40.312 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.312 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:40.312 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.312 18:59:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.312 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.312 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.313 [ 00:09:40.313 { 00:09:40.313 "name": "BaseBdev3", 00:09:40.313 "aliases": [ 00:09:40.313 "96b33366-7457-49a1-9cff-44680e863cb0" 00:09:40.313 ], 00:09:40.313 "product_name": "Malloc disk", 00:09:40.313 "block_size": 512, 00:09:40.313 "num_blocks": 65536, 00:09:40.313 "uuid": "96b33366-7457-49a1-9cff-44680e863cb0", 00:09:40.313 "assigned_rate_limits": { 00:09:40.313 "rw_ios_per_sec": 0, 00:09:40.313 "rw_mbytes_per_sec": 0, 00:09:40.313 "r_mbytes_per_sec": 0, 00:09:40.313 "w_mbytes_per_sec": 0 00:09:40.313 }, 00:09:40.313 "claimed": true, 00:09:40.313 "claim_type": "exclusive_write", 00:09:40.313 "zoned": false, 00:09:40.313 "supported_io_types": { 00:09:40.313 "read": true, 00:09:40.313 "write": true, 00:09:40.313 "unmap": true, 00:09:40.313 "flush": true, 00:09:40.313 "reset": true, 00:09:40.313 "nvme_admin": false, 00:09:40.313 "nvme_io": false, 00:09:40.313 "nvme_io_md": false, 00:09:40.313 "write_zeroes": true, 00:09:40.313 "zcopy": true, 00:09:40.313 "get_zone_info": false, 00:09:40.313 "zone_management": false, 00:09:40.313 "zone_append": false, 00:09:40.313 "compare": false, 
00:09:40.313 "compare_and_write": false, 00:09:40.313 "abort": true, 00:09:40.313 "seek_hole": false, 00:09:40.313 "seek_data": false, 00:09:40.313 "copy": true, 00:09:40.313 "nvme_iov_md": false 00:09:40.313 }, 00:09:40.313 "memory_domains": [ 00:09:40.313 { 00:09:40.313 "dma_device_id": "system", 00:09:40.313 "dma_device_type": 1 00:09:40.313 }, 00:09:40.313 { 00:09:40.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.313 "dma_device_type": 2 00:09:40.313 } 00:09:40.313 ], 00:09:40.313 "driver_specific": {} 00:09:40.313 } 00:09:40.313 ] 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.313 "name": "Existed_Raid", 00:09:40.313 "uuid": "80dcb5a5-64fd-4b2f-9248-67df51ea1c8d", 00:09:40.313 "strip_size_kb": 64, 00:09:40.313 "state": "online", 00:09:40.313 "raid_level": "concat", 00:09:40.313 "superblock": false, 00:09:40.313 "num_base_bdevs": 3, 00:09:40.313 "num_base_bdevs_discovered": 3, 00:09:40.313 "num_base_bdevs_operational": 3, 00:09:40.313 "base_bdevs_list": [ 00:09:40.313 { 00:09:40.313 "name": "BaseBdev1", 00:09:40.313 "uuid": "76fd85a2-5a5a-4734-a8ff-8c7891a1c6f9", 00:09:40.313 "is_configured": true, 00:09:40.313 "data_offset": 0, 00:09:40.313 "data_size": 65536 00:09:40.313 }, 00:09:40.313 { 00:09:40.313 "name": "BaseBdev2", 00:09:40.313 "uuid": "b2a924be-ab7b-4ca0-bdb0-7a34420d37e4", 00:09:40.313 "is_configured": true, 00:09:40.313 "data_offset": 0, 00:09:40.313 "data_size": 65536 00:09:40.313 }, 00:09:40.313 { 00:09:40.313 "name": "BaseBdev3", 00:09:40.313 "uuid": "96b33366-7457-49a1-9cff-44680e863cb0", 00:09:40.313 "is_configured": true, 00:09:40.313 "data_offset": 0, 00:09:40.313 "data_size": 65536 00:09:40.313 } 00:09:40.313 ] 00:09:40.313 }' 00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
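The trace above repeatedly calls `verify_raid_bdev_state`, which filters the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the reported state, raid level, strip size, and base-bdev counts against the expected values. A minimal Python sketch of that check follows; the helper function and the abbreviated sample record are illustrative only (modeled on the JSON dumps in this log), not part of the SPDK test suite:

```python
import json

def verify_raid_bdev_state(raid_bdevs_json, name, expected_state,
                           raid_level, strip_size_kb, num_operational):
    """Sketch of the shell helper: select the named raid bdev from the
    `bdev_raid_get_bdevs all` output and check its reported state."""
    info = next(b for b in json.loads(raid_bdevs_json) if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # Discovered bases are those already configured into the raid bdev.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return info

# Abbreviated sample modeled on the "online" dump above: all 3 bases claimed.
sample = json.dumps([{
    "name": "Existed_Raid", "state": "online", "raid_level": "concat",
    "strip_size_kb": 64, "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": True},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": True},
    ],
}])

info = verify_raid_bdev_state(sample, "Existed_Raid", "online", "concat", 64, 3)
```

In the test itself this runs once per added base bdev: the raid stays `configuring` with `num_base_bdevs_discovered` at 0, 1, then 2, and flips to `online` only after the third malloc base is claimed.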
00:09:40.313 18:59:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.879 [2024-11-26 18:59:07.396772] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.879 "name": "Existed_Raid", 00:09:40.879 "aliases": [ 00:09:40.879 "80dcb5a5-64fd-4b2f-9248-67df51ea1c8d" 00:09:40.879 ], 00:09:40.879 "product_name": "Raid Volume", 00:09:40.879 "block_size": 512, 00:09:40.879 "num_blocks": 196608, 00:09:40.879 "uuid": "80dcb5a5-64fd-4b2f-9248-67df51ea1c8d", 00:09:40.879 "assigned_rate_limits": { 00:09:40.879 "rw_ios_per_sec": 0, 00:09:40.879 "rw_mbytes_per_sec": 0, 00:09:40.879 "r_mbytes_per_sec": 
0, 00:09:40.879 "w_mbytes_per_sec": 0 00:09:40.879 }, 00:09:40.879 "claimed": false, 00:09:40.879 "zoned": false, 00:09:40.879 "supported_io_types": { 00:09:40.879 "read": true, 00:09:40.879 "write": true, 00:09:40.879 "unmap": true, 00:09:40.879 "flush": true, 00:09:40.879 "reset": true, 00:09:40.879 "nvme_admin": false, 00:09:40.879 "nvme_io": false, 00:09:40.879 "nvme_io_md": false, 00:09:40.879 "write_zeroes": true, 00:09:40.879 "zcopy": false, 00:09:40.879 "get_zone_info": false, 00:09:40.879 "zone_management": false, 00:09:40.879 "zone_append": false, 00:09:40.879 "compare": false, 00:09:40.879 "compare_and_write": false, 00:09:40.879 "abort": false, 00:09:40.879 "seek_hole": false, 00:09:40.879 "seek_data": false, 00:09:40.879 "copy": false, 00:09:40.879 "nvme_iov_md": false 00:09:40.879 }, 00:09:40.879 "memory_domains": [ 00:09:40.879 { 00:09:40.879 "dma_device_id": "system", 00:09:40.879 "dma_device_type": 1 00:09:40.879 }, 00:09:40.879 { 00:09:40.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.879 "dma_device_type": 2 00:09:40.879 }, 00:09:40.879 { 00:09:40.879 "dma_device_id": "system", 00:09:40.879 "dma_device_type": 1 00:09:40.879 }, 00:09:40.879 { 00:09:40.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.879 "dma_device_type": 2 00:09:40.879 }, 00:09:40.879 { 00:09:40.879 "dma_device_id": "system", 00:09:40.879 "dma_device_type": 1 00:09:40.879 }, 00:09:40.879 { 00:09:40.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.879 "dma_device_type": 2 00:09:40.879 } 00:09:40.879 ], 00:09:40.879 "driver_specific": { 00:09:40.879 "raid": { 00:09:40.879 "uuid": "80dcb5a5-64fd-4b2f-9248-67df51ea1c8d", 00:09:40.879 "strip_size_kb": 64, 00:09:40.879 "state": "online", 00:09:40.879 "raid_level": "concat", 00:09:40.879 "superblock": false, 00:09:40.879 "num_base_bdevs": 3, 00:09:40.879 "num_base_bdevs_discovered": 3, 00:09:40.879 "num_base_bdevs_operational": 3, 00:09:40.879 "base_bdevs_list": [ 00:09:40.879 { 00:09:40.879 "name": "BaseBdev1", 
00:09:40.879 "uuid": "76fd85a2-5a5a-4734-a8ff-8c7891a1c6f9", 00:09:40.879 "is_configured": true, 00:09:40.879 "data_offset": 0, 00:09:40.879 "data_size": 65536 00:09:40.879 }, 00:09:40.879 { 00:09:40.879 "name": "BaseBdev2", 00:09:40.879 "uuid": "b2a924be-ab7b-4ca0-bdb0-7a34420d37e4", 00:09:40.879 "is_configured": true, 00:09:40.879 "data_offset": 0, 00:09:40.879 "data_size": 65536 00:09:40.879 }, 00:09:40.879 { 00:09:40.879 "name": "BaseBdev3", 00:09:40.879 "uuid": "96b33366-7457-49a1-9cff-44680e863cb0", 00:09:40.879 "is_configured": true, 00:09:40.879 "data_offset": 0, 00:09:40.879 "data_size": 65536 00:09:40.879 } 00:09:40.879 ] 00:09:40.879 } 00:09:40.879 } 00:09:40.879 }' 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:40.879 BaseBdev2 00:09:40.879 BaseBdev3' 00:09:40.879 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.138 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.138 [2024-11-26 18:59:07.712490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:41.138 [2024-11-26 18:59:07.712664] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.138 [2024-11-26 18:59:07.712768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.396 "name": "Existed_Raid", 00:09:41.396 "uuid": "80dcb5a5-64fd-4b2f-9248-67df51ea1c8d", 00:09:41.396 "strip_size_kb": 64, 00:09:41.396 "state": "offline", 00:09:41.396 "raid_level": "concat", 00:09:41.396 "superblock": false, 00:09:41.396 "num_base_bdevs": 3, 00:09:41.396 "num_base_bdevs_discovered": 2, 00:09:41.396 "num_base_bdevs_operational": 2, 00:09:41.396 "base_bdevs_list": [ 00:09:41.396 { 00:09:41.396 "name": null, 00:09:41.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.396 "is_configured": false, 00:09:41.396 "data_offset": 0, 00:09:41.396 "data_size": 65536 00:09:41.396 }, 00:09:41.396 { 00:09:41.396 "name": "BaseBdev2", 00:09:41.396 "uuid": 
"b2a924be-ab7b-4ca0-bdb0-7a34420d37e4", 00:09:41.396 "is_configured": true, 00:09:41.396 "data_offset": 0, 00:09:41.396 "data_size": 65536 00:09:41.396 }, 00:09:41.396 { 00:09:41.396 "name": "BaseBdev3", 00:09:41.396 "uuid": "96b33366-7457-49a1-9cff-44680e863cb0", 00:09:41.396 "is_configured": true, 00:09:41.396 "data_offset": 0, 00:09:41.396 "data_size": 65536 00:09:41.396 } 00:09:41.396 ] 00:09:41.396 }' 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.396 18:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.961 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.961 [2024-11-26 18:59:08.384710] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.962 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.962 [2024-11-26 18:59:08.542280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.962 [2024-11-26 18:59:08.542515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.219 18:59:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.219 BaseBdev2 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.219 
18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.219 [ 00:09:42.219 { 00:09:42.219 "name": "BaseBdev2", 00:09:42.219 "aliases": [ 00:09:42.219 "57ebc710-5a5a-4c55-8e79-71350f8eb8e2" 00:09:42.219 ], 00:09:42.219 "product_name": "Malloc disk", 00:09:42.219 "block_size": 512, 00:09:42.219 "num_blocks": 65536, 00:09:42.219 "uuid": "57ebc710-5a5a-4c55-8e79-71350f8eb8e2", 00:09:42.219 "assigned_rate_limits": { 00:09:42.219 "rw_ios_per_sec": 0, 00:09:42.219 "rw_mbytes_per_sec": 0, 00:09:42.219 "r_mbytes_per_sec": 0, 00:09:42.219 "w_mbytes_per_sec": 0 00:09:42.219 }, 00:09:42.219 "claimed": false, 00:09:42.219 "zoned": false, 00:09:42.219 "supported_io_types": { 00:09:42.219 "read": true, 00:09:42.219 "write": true, 00:09:42.219 "unmap": true, 00:09:42.219 "flush": true, 00:09:42.219 "reset": true, 00:09:42.219 "nvme_admin": false, 00:09:42.219 "nvme_io": false, 00:09:42.219 "nvme_io_md": false, 00:09:42.219 "write_zeroes": true, 
00:09:42.219 "zcopy": true, 00:09:42.219 "get_zone_info": false, 00:09:42.219 "zone_management": false, 00:09:42.219 "zone_append": false, 00:09:42.219 "compare": false, 00:09:42.219 "compare_and_write": false, 00:09:42.219 "abort": true, 00:09:42.219 "seek_hole": false, 00:09:42.219 "seek_data": false, 00:09:42.219 "copy": true, 00:09:42.219 "nvme_iov_md": false 00:09:42.219 }, 00:09:42.219 "memory_domains": [ 00:09:42.219 { 00:09:42.219 "dma_device_id": "system", 00:09:42.219 "dma_device_type": 1 00:09:42.219 }, 00:09:42.219 { 00:09:42.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.219 "dma_device_type": 2 00:09:42.219 } 00:09:42.219 ], 00:09:42.219 "driver_specific": {} 00:09:42.219 } 00:09:42.219 ] 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.219 BaseBdev3 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:42.219 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:42.220 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:42.220 18:59:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:42.220 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:42.220 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:42.220 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:42.220 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.220 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.477 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.477 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:42.477 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.477 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.477 [ 00:09:42.477 { 00:09:42.477 "name": "BaseBdev3", 00:09:42.477 "aliases": [ 00:09:42.477 "11730c73-f5cc-4bb1-908c-a766ee75d072" 00:09:42.477 ], 00:09:42.477 "product_name": "Malloc disk", 00:09:42.477 "block_size": 512, 00:09:42.477 "num_blocks": 65536, 00:09:42.477 "uuid": "11730c73-f5cc-4bb1-908c-a766ee75d072", 00:09:42.477 "assigned_rate_limits": { 00:09:42.477 "rw_ios_per_sec": 0, 00:09:42.477 "rw_mbytes_per_sec": 0, 00:09:42.477 "r_mbytes_per_sec": 0, 00:09:42.477 "w_mbytes_per_sec": 0 00:09:42.477 }, 00:09:42.477 "claimed": false, 00:09:42.477 "zoned": false, 00:09:42.477 "supported_io_types": { 00:09:42.477 "read": true, 00:09:42.477 "write": true, 00:09:42.477 "unmap": true, 00:09:42.477 "flush": true, 00:09:42.477 "reset": true, 00:09:42.477 "nvme_admin": false, 00:09:42.477 "nvme_io": false, 00:09:42.477 "nvme_io_md": false, 00:09:42.477 "write_zeroes": true, 
00:09:42.477 "zcopy": true, 00:09:42.477 "get_zone_info": false, 00:09:42.477 "zone_management": false, 00:09:42.477 "zone_append": false, 00:09:42.477 "compare": false, 00:09:42.477 "compare_and_write": false, 00:09:42.477 "abort": true, 00:09:42.477 "seek_hole": false, 00:09:42.477 "seek_data": false, 00:09:42.477 "copy": true, 00:09:42.477 "nvme_iov_md": false 00:09:42.477 }, 00:09:42.477 "memory_domains": [ 00:09:42.477 { 00:09:42.477 "dma_device_id": "system", 00:09:42.477 "dma_device_type": 1 00:09:42.477 }, 00:09:42.477 { 00:09:42.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.477 "dma_device_type": 2 00:09:42.477 } 00:09:42.477 ], 00:09:42.477 "driver_specific": {} 00:09:42.477 } 00:09:42.477 ] 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.478 [2024-11-26 18:59:08.870701] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.478 [2024-11-26 18:59:08.870877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.478 [2024-11-26 18:59:08.871043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.478 [2024-11-26 18:59:08.873774] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.478 "name": "Existed_Raid", 00:09:42.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.478 "strip_size_kb": 64, 00:09:42.478 "state": "configuring", 00:09:42.478 "raid_level": "concat", 00:09:42.478 "superblock": false, 00:09:42.478 "num_base_bdevs": 3, 00:09:42.478 "num_base_bdevs_discovered": 2, 00:09:42.478 "num_base_bdevs_operational": 3, 00:09:42.478 "base_bdevs_list": [ 00:09:42.478 { 00:09:42.478 "name": "BaseBdev1", 00:09:42.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.478 "is_configured": false, 00:09:42.478 "data_offset": 0, 00:09:42.478 "data_size": 0 00:09:42.478 }, 00:09:42.478 { 00:09:42.478 "name": "BaseBdev2", 00:09:42.478 "uuid": "57ebc710-5a5a-4c55-8e79-71350f8eb8e2", 00:09:42.478 "is_configured": true, 00:09:42.478 "data_offset": 0, 00:09:42.478 "data_size": 65536 00:09:42.478 }, 00:09:42.478 { 00:09:42.478 "name": "BaseBdev3", 00:09:42.478 "uuid": "11730c73-f5cc-4bb1-908c-a766ee75d072", 00:09:42.478 "is_configured": true, 00:09:42.478 "data_offset": 0, 00:09:42.478 "data_size": 65536 00:09:42.478 } 00:09:42.478 ] 00:09:42.478 }' 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.478 18:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.065 [2024-11-26 18:59:09.438921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.065 "name": "Existed_Raid", 00:09:43.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.065 "strip_size_kb": 64, 00:09:43.065 "state": "configuring", 00:09:43.065 "raid_level": "concat", 00:09:43.065 "superblock": false, 
00:09:43.065 "num_base_bdevs": 3, 00:09:43.065 "num_base_bdevs_discovered": 1, 00:09:43.065 "num_base_bdevs_operational": 3, 00:09:43.065 "base_bdevs_list": [ 00:09:43.065 { 00:09:43.065 "name": "BaseBdev1", 00:09:43.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.065 "is_configured": false, 00:09:43.065 "data_offset": 0, 00:09:43.065 "data_size": 0 00:09:43.065 }, 00:09:43.065 { 00:09:43.065 "name": null, 00:09:43.065 "uuid": "57ebc710-5a5a-4c55-8e79-71350f8eb8e2", 00:09:43.065 "is_configured": false, 00:09:43.065 "data_offset": 0, 00:09:43.065 "data_size": 65536 00:09:43.065 }, 00:09:43.065 { 00:09:43.065 "name": "BaseBdev3", 00:09:43.065 "uuid": "11730c73-f5cc-4bb1-908c-a766ee75d072", 00:09:43.065 "is_configured": true, 00:09:43.065 "data_offset": 0, 00:09:43.065 "data_size": 65536 00:09:43.065 } 00:09:43.065 ] 00:09:43.065 }' 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.065 18:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.632 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:43.632 18:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.632 18:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.632 18:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.632 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.633 
18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.633 BaseBdev1 00:09:43.633 [2024-11-26 18:59:10.086409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.633 [ 00:09:43.633 { 00:09:43.633 "name": "BaseBdev1", 00:09:43.633 "aliases": [ 00:09:43.633 "81a2db2a-b126-44c6-baf0-aecdd182d9f9" 00:09:43.633 ], 00:09:43.633 "product_name": 
"Malloc disk", 00:09:43.633 "block_size": 512, 00:09:43.633 "num_blocks": 65536, 00:09:43.633 "uuid": "81a2db2a-b126-44c6-baf0-aecdd182d9f9", 00:09:43.633 "assigned_rate_limits": { 00:09:43.633 "rw_ios_per_sec": 0, 00:09:43.633 "rw_mbytes_per_sec": 0, 00:09:43.633 "r_mbytes_per_sec": 0, 00:09:43.633 "w_mbytes_per_sec": 0 00:09:43.633 }, 00:09:43.633 "claimed": true, 00:09:43.633 "claim_type": "exclusive_write", 00:09:43.633 "zoned": false, 00:09:43.633 "supported_io_types": { 00:09:43.633 "read": true, 00:09:43.633 "write": true, 00:09:43.633 "unmap": true, 00:09:43.633 "flush": true, 00:09:43.633 "reset": true, 00:09:43.633 "nvme_admin": false, 00:09:43.633 "nvme_io": false, 00:09:43.633 "nvme_io_md": false, 00:09:43.633 "write_zeroes": true, 00:09:43.633 "zcopy": true, 00:09:43.633 "get_zone_info": false, 00:09:43.633 "zone_management": false, 00:09:43.633 "zone_append": false, 00:09:43.633 "compare": false, 00:09:43.633 "compare_and_write": false, 00:09:43.633 "abort": true, 00:09:43.633 "seek_hole": false, 00:09:43.633 "seek_data": false, 00:09:43.633 "copy": true, 00:09:43.633 "nvme_iov_md": false 00:09:43.633 }, 00:09:43.633 "memory_domains": [ 00:09:43.633 { 00:09:43.633 "dma_device_id": "system", 00:09:43.633 "dma_device_type": 1 00:09:43.633 }, 00:09:43.633 { 00:09:43.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.633 "dma_device_type": 2 00:09:43.633 } 00:09:43.633 ], 00:09:43.633 "driver_specific": {} 00:09:43.633 } 00:09:43.633 ] 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.633 18:59:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.633 "name": "Existed_Raid", 00:09:43.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.633 "strip_size_kb": 64, 00:09:43.633 "state": "configuring", 00:09:43.633 "raid_level": "concat", 00:09:43.633 "superblock": false, 00:09:43.633 "num_base_bdevs": 3, 00:09:43.633 "num_base_bdevs_discovered": 2, 00:09:43.633 "num_base_bdevs_operational": 3, 00:09:43.633 "base_bdevs_list": [ 00:09:43.633 { 00:09:43.633 "name": "BaseBdev1", 
00:09:43.633 "uuid": "81a2db2a-b126-44c6-baf0-aecdd182d9f9", 00:09:43.633 "is_configured": true, 00:09:43.633 "data_offset": 0, 00:09:43.633 "data_size": 65536 00:09:43.633 }, 00:09:43.633 { 00:09:43.633 "name": null, 00:09:43.633 "uuid": "57ebc710-5a5a-4c55-8e79-71350f8eb8e2", 00:09:43.633 "is_configured": false, 00:09:43.633 "data_offset": 0, 00:09:43.633 "data_size": 65536 00:09:43.633 }, 00:09:43.633 { 00:09:43.633 "name": "BaseBdev3", 00:09:43.633 "uuid": "11730c73-f5cc-4bb1-908c-a766ee75d072", 00:09:43.633 "is_configured": true, 00:09:43.633 "data_offset": 0, 00:09:43.633 "data_size": 65536 00:09:43.633 } 00:09:43.633 ] 00:09:43.633 }' 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.633 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.201 [2024-11-26 18:59:10.702579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:44.201 
18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.201 "name": "Existed_Raid", 00:09:44.201 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:44.201 "strip_size_kb": 64, 00:09:44.201 "state": "configuring", 00:09:44.201 "raid_level": "concat", 00:09:44.201 "superblock": false, 00:09:44.201 "num_base_bdevs": 3, 00:09:44.201 "num_base_bdevs_discovered": 1, 00:09:44.201 "num_base_bdevs_operational": 3, 00:09:44.201 "base_bdevs_list": [ 00:09:44.201 { 00:09:44.201 "name": "BaseBdev1", 00:09:44.201 "uuid": "81a2db2a-b126-44c6-baf0-aecdd182d9f9", 00:09:44.201 "is_configured": true, 00:09:44.201 "data_offset": 0, 00:09:44.201 "data_size": 65536 00:09:44.201 }, 00:09:44.201 { 00:09:44.201 "name": null, 00:09:44.201 "uuid": "57ebc710-5a5a-4c55-8e79-71350f8eb8e2", 00:09:44.201 "is_configured": false, 00:09:44.201 "data_offset": 0, 00:09:44.201 "data_size": 65536 00:09:44.201 }, 00:09:44.201 { 00:09:44.201 "name": null, 00:09:44.201 "uuid": "11730c73-f5cc-4bb1-908c-a766ee75d072", 00:09:44.201 "is_configured": false, 00:09:44.201 "data_offset": 0, 00:09:44.201 "data_size": 65536 00:09:44.201 } 00:09:44.201 ] 00:09:44.201 }' 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.201 18:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.767 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:44.767 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.767 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.767 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.767 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.767 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:44.767 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.768 [2024-11-26 18:59:11.310811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.768 "name": "Existed_Raid", 00:09:44.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.768 "strip_size_kb": 64, 00:09:44.768 "state": "configuring", 00:09:44.768 "raid_level": "concat", 00:09:44.768 "superblock": false, 00:09:44.768 "num_base_bdevs": 3, 00:09:44.768 "num_base_bdevs_discovered": 2, 00:09:44.768 "num_base_bdevs_operational": 3, 00:09:44.768 "base_bdevs_list": [ 00:09:44.768 { 00:09:44.768 "name": "BaseBdev1", 00:09:44.768 "uuid": "81a2db2a-b126-44c6-baf0-aecdd182d9f9", 00:09:44.768 "is_configured": true, 00:09:44.768 "data_offset": 0, 00:09:44.768 "data_size": 65536 00:09:44.768 }, 00:09:44.768 { 00:09:44.768 "name": null, 00:09:44.768 "uuid": "57ebc710-5a5a-4c55-8e79-71350f8eb8e2", 00:09:44.768 "is_configured": false, 00:09:44.768 "data_offset": 0, 00:09:44.768 "data_size": 65536 00:09:44.768 }, 00:09:44.768 { 00:09:44.768 "name": "BaseBdev3", 00:09:44.768 "uuid": "11730c73-f5cc-4bb1-908c-a766ee75d072", 00:09:44.768 "is_configured": true, 00:09:44.768 "data_offset": 0, 00:09:44.768 "data_size": 65536 00:09:44.768 } 00:09:44.768 ] 00:09:44.768 }' 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.768 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.335 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.335 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:45.335 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:45.335 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.335 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.335 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:45.335 18:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:45.335 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.335 18:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.335 [2024-11-26 18:59:11.919033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.594 18:59:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.594 "name": "Existed_Raid", 00:09:45.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.594 "strip_size_kb": 64, 00:09:45.594 "state": "configuring", 00:09:45.594 "raid_level": "concat", 00:09:45.594 "superblock": false, 00:09:45.594 "num_base_bdevs": 3, 00:09:45.594 "num_base_bdevs_discovered": 1, 00:09:45.594 "num_base_bdevs_operational": 3, 00:09:45.594 "base_bdevs_list": [ 00:09:45.594 { 00:09:45.594 "name": null, 00:09:45.594 "uuid": "81a2db2a-b126-44c6-baf0-aecdd182d9f9", 00:09:45.594 "is_configured": false, 00:09:45.594 "data_offset": 0, 00:09:45.594 "data_size": 65536 00:09:45.594 }, 00:09:45.594 { 00:09:45.594 "name": null, 00:09:45.594 "uuid": "57ebc710-5a5a-4c55-8e79-71350f8eb8e2", 00:09:45.594 "is_configured": false, 00:09:45.594 "data_offset": 0, 00:09:45.594 "data_size": 65536 00:09:45.594 }, 00:09:45.594 { 00:09:45.594 "name": "BaseBdev3", 00:09:45.594 "uuid": "11730c73-f5cc-4bb1-908c-a766ee75d072", 00:09:45.594 "is_configured": true, 00:09:45.594 "data_offset": 0, 00:09:45.594 "data_size": 65536 00:09:45.594 } 00:09:45.594 ] 00:09:45.594 }' 00:09:45.594 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.594 18:59:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.162 [2024-11-26 18:59:12.605075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.162 18:59:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.162 "name": "Existed_Raid", 00:09:46.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.162 "strip_size_kb": 64, 00:09:46.162 "state": "configuring", 00:09:46.162 "raid_level": "concat", 00:09:46.162 "superblock": false, 00:09:46.162 "num_base_bdevs": 3, 00:09:46.162 "num_base_bdevs_discovered": 2, 00:09:46.162 "num_base_bdevs_operational": 3, 00:09:46.162 "base_bdevs_list": [ 00:09:46.162 { 00:09:46.162 "name": null, 00:09:46.162 "uuid": "81a2db2a-b126-44c6-baf0-aecdd182d9f9", 00:09:46.162 "is_configured": false, 00:09:46.162 "data_offset": 0, 00:09:46.162 "data_size": 65536 00:09:46.162 }, 00:09:46.162 { 00:09:46.162 "name": "BaseBdev2", 00:09:46.162 "uuid": "57ebc710-5a5a-4c55-8e79-71350f8eb8e2", 00:09:46.162 "is_configured": true, 00:09:46.162 "data_offset": 
0, 00:09:46.162 "data_size": 65536 00:09:46.162 }, 00:09:46.162 { 00:09:46.162 "name": "BaseBdev3", 00:09:46.162 "uuid": "11730c73-f5cc-4bb1-908c-a766ee75d072", 00:09:46.162 "is_configured": true, 00:09:46.162 "data_offset": 0, 00:09:46.162 "data_size": 65536 00:09:46.162 } 00:09:46.162 ] 00:09:46.162 }' 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.162 18:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.729 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.729 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.729 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 81a2db2a-b126-44c6-baf0-aecdd182d9f9 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.730 [2024-11-26 18:59:13.293755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:46.730 NewBaseBdev 00:09:46.730 [2024-11-26 18:59:13.293962] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:46.730 [2024-11-26 18:59:13.294003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:46.730 [2024-11-26 18:59:13.294370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:46.730 [2024-11-26 18:59:13.294580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:46.730 [2024-11-26 18:59:13.294596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:46.730 [2024-11-26 18:59:13.294921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:46.730 
18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.730 [ 00:09:46.730 { 00:09:46.730 "name": "NewBaseBdev", 00:09:46.730 "aliases": [ 00:09:46.730 "81a2db2a-b126-44c6-baf0-aecdd182d9f9" 00:09:46.730 ], 00:09:46.730 "product_name": "Malloc disk", 00:09:46.730 "block_size": 512, 00:09:46.730 "num_blocks": 65536, 00:09:46.730 "uuid": "81a2db2a-b126-44c6-baf0-aecdd182d9f9", 00:09:46.730 "assigned_rate_limits": { 00:09:46.730 "rw_ios_per_sec": 0, 00:09:46.730 "rw_mbytes_per_sec": 0, 00:09:46.730 "r_mbytes_per_sec": 0, 00:09:46.730 "w_mbytes_per_sec": 0 00:09:46.730 }, 00:09:46.730 "claimed": true, 00:09:46.730 "claim_type": "exclusive_write", 00:09:46.730 "zoned": false, 00:09:46.730 "supported_io_types": { 00:09:46.730 "read": true, 00:09:46.730 "write": true, 00:09:46.730 "unmap": true, 00:09:46.730 "flush": true, 00:09:46.730 "reset": true, 00:09:46.730 "nvme_admin": false, 00:09:46.730 "nvme_io": false, 00:09:46.730 "nvme_io_md": false, 00:09:46.730 "write_zeroes": true, 00:09:46.730 "zcopy": true, 00:09:46.730 "get_zone_info": false, 00:09:46.730 "zone_management": false, 00:09:46.730 "zone_append": false, 00:09:46.730 "compare": false, 00:09:46.730 "compare_and_write": false, 00:09:46.730 "abort": true, 00:09:46.730 "seek_hole": false, 00:09:46.730 "seek_data": false, 00:09:46.730 "copy": true, 00:09:46.730 "nvme_iov_md": false 00:09:46.730 }, 00:09:46.730 
"memory_domains": [ 00:09:46.730 { 00:09:46.730 "dma_device_id": "system", 00:09:46.730 "dma_device_type": 1 00:09:46.730 }, 00:09:46.730 { 00:09:46.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.730 "dma_device_type": 2 00:09:46.730 } 00:09:46.730 ], 00:09:46.730 "driver_specific": {} 00:09:46.730 } 00:09:46.730 ] 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.730 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.989 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.989 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.989 "name": "Existed_Raid", 00:09:46.989 "uuid": "b4846105-126c-4393-9a7c-2739e0c2f84b", 00:09:46.989 "strip_size_kb": 64, 00:09:46.989 "state": "online", 00:09:46.989 "raid_level": "concat", 00:09:46.989 "superblock": false, 00:09:46.989 "num_base_bdevs": 3, 00:09:46.989 "num_base_bdevs_discovered": 3, 00:09:46.989 "num_base_bdevs_operational": 3, 00:09:46.989 "base_bdevs_list": [ 00:09:46.989 { 00:09:46.989 "name": "NewBaseBdev", 00:09:46.989 "uuid": "81a2db2a-b126-44c6-baf0-aecdd182d9f9", 00:09:46.989 "is_configured": true, 00:09:46.989 "data_offset": 0, 00:09:46.989 "data_size": 65536 00:09:46.989 }, 00:09:46.989 { 00:09:46.989 "name": "BaseBdev2", 00:09:46.989 "uuid": "57ebc710-5a5a-4c55-8e79-71350f8eb8e2", 00:09:46.989 "is_configured": true, 00:09:46.989 "data_offset": 0, 00:09:46.989 "data_size": 65536 00:09:46.989 }, 00:09:46.989 { 00:09:46.989 "name": "BaseBdev3", 00:09:46.989 "uuid": "11730c73-f5cc-4bb1-908c-a766ee75d072", 00:09:46.989 "is_configured": true, 00:09:46.989 "data_offset": 0, 00:09:46.989 "data_size": 65536 00:09:46.989 } 00:09:46.989 ] 00:09:46.989 }' 00:09:46.989 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.989 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.555 [2024-11-26 18:59:13.890459] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.555 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.555 "name": "Existed_Raid", 00:09:47.555 "aliases": [ 00:09:47.555 "b4846105-126c-4393-9a7c-2739e0c2f84b" 00:09:47.555 ], 00:09:47.555 "product_name": "Raid Volume", 00:09:47.555 "block_size": 512, 00:09:47.555 "num_blocks": 196608, 00:09:47.555 "uuid": "b4846105-126c-4393-9a7c-2739e0c2f84b", 00:09:47.555 "assigned_rate_limits": { 00:09:47.555 "rw_ios_per_sec": 0, 00:09:47.555 "rw_mbytes_per_sec": 0, 00:09:47.555 "r_mbytes_per_sec": 0, 00:09:47.555 "w_mbytes_per_sec": 0 00:09:47.555 }, 00:09:47.555 "claimed": false, 00:09:47.555 "zoned": false, 00:09:47.555 "supported_io_types": { 00:09:47.555 "read": true, 00:09:47.556 "write": true, 00:09:47.556 "unmap": true, 00:09:47.556 "flush": true, 00:09:47.556 "reset": true, 00:09:47.556 "nvme_admin": false, 00:09:47.556 "nvme_io": false, 00:09:47.556 "nvme_io_md": false, 00:09:47.556 "write_zeroes": true, 
00:09:47.556 "zcopy": false, 00:09:47.556 "get_zone_info": false, 00:09:47.556 "zone_management": false, 00:09:47.556 "zone_append": false, 00:09:47.556 "compare": false, 00:09:47.556 "compare_and_write": false, 00:09:47.556 "abort": false, 00:09:47.556 "seek_hole": false, 00:09:47.556 "seek_data": false, 00:09:47.556 "copy": false, 00:09:47.556 "nvme_iov_md": false 00:09:47.556 }, 00:09:47.556 "memory_domains": [ 00:09:47.556 { 00:09:47.556 "dma_device_id": "system", 00:09:47.556 "dma_device_type": 1 00:09:47.556 }, 00:09:47.556 { 00:09:47.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.556 "dma_device_type": 2 00:09:47.556 }, 00:09:47.556 { 00:09:47.556 "dma_device_id": "system", 00:09:47.556 "dma_device_type": 1 00:09:47.556 }, 00:09:47.556 { 00:09:47.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.556 "dma_device_type": 2 00:09:47.556 }, 00:09:47.556 { 00:09:47.556 "dma_device_id": "system", 00:09:47.556 "dma_device_type": 1 00:09:47.556 }, 00:09:47.556 { 00:09:47.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.556 "dma_device_type": 2 00:09:47.556 } 00:09:47.556 ], 00:09:47.556 "driver_specific": { 00:09:47.556 "raid": { 00:09:47.556 "uuid": "b4846105-126c-4393-9a7c-2739e0c2f84b", 00:09:47.556 "strip_size_kb": 64, 00:09:47.556 "state": "online", 00:09:47.556 "raid_level": "concat", 00:09:47.556 "superblock": false, 00:09:47.556 "num_base_bdevs": 3, 00:09:47.556 "num_base_bdevs_discovered": 3, 00:09:47.556 "num_base_bdevs_operational": 3, 00:09:47.556 "base_bdevs_list": [ 00:09:47.556 { 00:09:47.556 "name": "NewBaseBdev", 00:09:47.556 "uuid": "81a2db2a-b126-44c6-baf0-aecdd182d9f9", 00:09:47.556 "is_configured": true, 00:09:47.556 "data_offset": 0, 00:09:47.556 "data_size": 65536 00:09:47.556 }, 00:09:47.556 { 00:09:47.556 "name": "BaseBdev2", 00:09:47.556 "uuid": "57ebc710-5a5a-4c55-8e79-71350f8eb8e2", 00:09:47.556 "is_configured": true, 00:09:47.556 "data_offset": 0, 00:09:47.556 "data_size": 65536 00:09:47.556 }, 00:09:47.556 { 
00:09:47.556 "name": "BaseBdev3", 00:09:47.556 "uuid": "11730c73-f5cc-4bb1-908c-a766ee75d072", 00:09:47.556 "is_configured": true, 00:09:47.556 "data_offset": 0, 00:09:47.556 "data_size": 65536 00:09:47.556 } 00:09:47.556 ] 00:09:47.556 } 00:09:47.556 } 00:09:47.556 }' 00:09:47.556 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.556 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:47.556 BaseBdev2 00:09:47.556 BaseBdev3' 00:09:47.556 18:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.556 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:47.815 [2024-11-26 18:59:14.214098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.815 [2024-11-26 18:59:14.214254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.815 [2024-11-26 18:59:14.214417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.815 [2024-11-26 18:59:14.214497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.815 [2024-11-26 18:59:14.214518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65968 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65968 ']' 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65968 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65968 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65968' 00:09:47.815 killing process with pid 65968 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65968 00:09:47.815 [2024-11-26 18:59:14.259688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.815 18:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65968 00:09:48.074 [2024-11-26 18:59:14.554289] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:49.449 00:09:49.449 real 0m12.434s 00:09:49.449 user 0m20.527s 00:09:49.449 sys 0m1.727s 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.449 ************************************ 00:09:49.449 END TEST raid_state_function_test 00:09:49.449 ************************************ 00:09:49.449 18:59:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:49.449 18:59:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:49.449 18:59:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.449 18:59:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.449 ************************************ 00:09:49.449 START TEST raid_state_function_test_sb 00:09:49.449 ************************************ 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:49.449 Process raid pid: 66611 00:09:49.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66611 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66611' 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66611 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66611 ']' 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.449 18:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.449 [2024-11-26 18:59:15.869482] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:09:49.449 [2024-11-26 18:59:15.869939] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.449 [2024-11-26 18:59:16.048152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.709 [2024-11-26 18:59:16.207823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.968 [2024-11-26 18:59:16.444553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.968 [2024-11-26 18:59:16.444943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.536 [2024-11-26 18:59:16.893321] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.536 [2024-11-26 18:59:16.893523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.536 [2024-11-26 
18:59:16.893553] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.536 [2024-11-26 18:59:16.893590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.536 [2024-11-26 18:59:16.893601] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.536 [2024-11-26 18:59:16.893617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.536 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.536 "name": "Existed_Raid", 00:09:50.537 "uuid": "4bc0d93c-c84e-461b-a01c-c28576dbe69a", 00:09:50.537 "strip_size_kb": 64, 00:09:50.537 "state": "configuring", 00:09:50.537 "raid_level": "concat", 00:09:50.537 "superblock": true, 00:09:50.537 "num_base_bdevs": 3, 00:09:50.537 "num_base_bdevs_discovered": 0, 00:09:50.537 "num_base_bdevs_operational": 3, 00:09:50.537 "base_bdevs_list": [ 00:09:50.537 { 00:09:50.537 "name": "BaseBdev1", 00:09:50.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.537 "is_configured": false, 00:09:50.537 "data_offset": 0, 00:09:50.537 "data_size": 0 00:09:50.537 }, 00:09:50.537 { 00:09:50.537 "name": "BaseBdev2", 00:09:50.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.537 "is_configured": false, 00:09:50.537 "data_offset": 0, 00:09:50.537 "data_size": 0 00:09:50.537 }, 00:09:50.537 { 00:09:50.537 "name": "BaseBdev3", 00:09:50.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.537 "is_configured": false, 00:09:50.537 "data_offset": 0, 00:09:50.537 "data_size": 0 00:09:50.537 } 00:09:50.537 ] 00:09:50.537 }' 00:09:50.537 18:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.537 18:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.795 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:50.795 18:59:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.795 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.795 [2024-11-26 18:59:17.389392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:50.795 [2024-11-26 18:59:17.389442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:50.795 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.795 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.795 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.795 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.795 [2024-11-26 18:59:17.397390] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.795 [2024-11-26 18:59:17.397487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.795 [2024-11-26 18:59:17.397636] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.795 [2024-11-26 18:59:17.397696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.795 [2024-11-26 18:59:17.397734] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.795 [2024-11-26 18:59:17.397878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.795 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.795 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.795 
18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.795 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.054 [2024-11-26 18:59:17.447066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.054 BaseBdev1 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.054 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.054 [ 00:09:51.054 { 
00:09:51.054 "name": "BaseBdev1", 00:09:51.054 "aliases": [ 00:09:51.054 "a1f18dad-11eb-417b-8b37-83a044b8bb09" 00:09:51.054 ], 00:09:51.054 "product_name": "Malloc disk", 00:09:51.054 "block_size": 512, 00:09:51.054 "num_blocks": 65536, 00:09:51.054 "uuid": "a1f18dad-11eb-417b-8b37-83a044b8bb09", 00:09:51.054 "assigned_rate_limits": { 00:09:51.054 "rw_ios_per_sec": 0, 00:09:51.054 "rw_mbytes_per_sec": 0, 00:09:51.054 "r_mbytes_per_sec": 0, 00:09:51.054 "w_mbytes_per_sec": 0 00:09:51.054 }, 00:09:51.054 "claimed": true, 00:09:51.054 "claim_type": "exclusive_write", 00:09:51.054 "zoned": false, 00:09:51.054 "supported_io_types": { 00:09:51.054 "read": true, 00:09:51.054 "write": true, 00:09:51.054 "unmap": true, 00:09:51.054 "flush": true, 00:09:51.054 "reset": true, 00:09:51.054 "nvme_admin": false, 00:09:51.054 "nvme_io": false, 00:09:51.054 "nvme_io_md": false, 00:09:51.054 "write_zeroes": true, 00:09:51.054 "zcopy": true, 00:09:51.054 "get_zone_info": false, 00:09:51.054 "zone_management": false, 00:09:51.054 "zone_append": false, 00:09:51.054 "compare": false, 00:09:51.055 "compare_and_write": false, 00:09:51.055 "abort": true, 00:09:51.055 "seek_hole": false, 00:09:51.055 "seek_data": false, 00:09:51.055 "copy": true, 00:09:51.055 "nvme_iov_md": false 00:09:51.055 }, 00:09:51.055 "memory_domains": [ 00:09:51.055 { 00:09:51.055 "dma_device_id": "system", 00:09:51.055 "dma_device_type": 1 00:09:51.055 }, 00:09:51.055 { 00:09:51.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.055 "dma_device_type": 2 00:09:51.055 } 00:09:51.055 ], 00:09:51.055 "driver_specific": {} 00:09:51.055 } 00:09:51.055 ] 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.055 "name": "Existed_Raid", 00:09:51.055 "uuid": "a668fe6b-e559-496d-b4fe-8e482ddc06b1", 00:09:51.055 "strip_size_kb": 64, 00:09:51.055 "state": "configuring", 00:09:51.055 "raid_level": "concat", 00:09:51.055 "superblock": true, 00:09:51.055 
"num_base_bdevs": 3, 00:09:51.055 "num_base_bdevs_discovered": 1, 00:09:51.055 "num_base_bdevs_operational": 3, 00:09:51.055 "base_bdevs_list": [ 00:09:51.055 { 00:09:51.055 "name": "BaseBdev1", 00:09:51.055 "uuid": "a1f18dad-11eb-417b-8b37-83a044b8bb09", 00:09:51.055 "is_configured": true, 00:09:51.055 "data_offset": 2048, 00:09:51.055 "data_size": 63488 00:09:51.055 }, 00:09:51.055 { 00:09:51.055 "name": "BaseBdev2", 00:09:51.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.055 "is_configured": false, 00:09:51.055 "data_offset": 0, 00:09:51.055 "data_size": 0 00:09:51.055 }, 00:09:51.055 { 00:09:51.055 "name": "BaseBdev3", 00:09:51.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.055 "is_configured": false, 00:09:51.055 "data_offset": 0, 00:09:51.055 "data_size": 0 00:09:51.055 } 00:09:51.055 ] 00:09:51.055 }' 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.055 18:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.622 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.622 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.622 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.622 [2024-11-26 18:59:18.007298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.622 [2024-11-26 18:59:18.007374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:51.622 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:51.623 
18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.623 [2024-11-26 18:59:18.015326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.623 [2024-11-26 18:59:18.018089] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.623 [2024-11-26 18:59:18.018265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.623 [2024-11-26 18:59:18.018418] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.623 [2024-11-26 18:59:18.018559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:51.623 "name": "Existed_Raid",
00:09:51.623 "uuid": "2e6c5e51-d41d-49ce-8fbf-cf956ccc0764",
00:09:51.623 "strip_size_kb": 64,
00:09:51.623 "state": "configuring",
00:09:51.623 "raid_level": "concat",
00:09:51.623 "superblock": true,
00:09:51.623 "num_base_bdevs": 3,
00:09:51.623 "num_base_bdevs_discovered": 1,
00:09:51.623 "num_base_bdevs_operational": 3,
00:09:51.623 "base_bdevs_list": [
00:09:51.623 {
00:09:51.623 "name": "BaseBdev1",
00:09:51.623 "uuid": "a1f18dad-11eb-417b-8b37-83a044b8bb09",
00:09:51.623 "is_configured": true,
00:09:51.623 "data_offset": 2048,
00:09:51.623 "data_size": 63488
00:09:51.623 },
00:09:51.623 {
00:09:51.623 "name": "BaseBdev2",
00:09:51.623 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:51.623 "is_configured": false,
00:09:51.623 "data_offset": 0,
00:09:51.623 "data_size": 0
00:09:51.623 },
00:09:51.623 {
00:09:51.623 "name": "BaseBdev3",
00:09:51.623 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:51.623 "is_configured": false,
00:09:51.623 "data_offset": 0,
00:09:51.623 "data_size": 0
00:09:51.623 }
00:09:51.623 ]
00:09:51.623 }'
00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:51.623 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.190 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.191 [2024-11-26 18:59:18.590781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:52.191 BaseBdev2
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.191 [
00:09:52.191 {
00:09:52.191 "name": "BaseBdev2",
00:09:52.191 "aliases": [
00:09:52.191 "654a6528-6be9-40be-a79d-8bc92ef1cdfc"
00:09:52.191 ],
00:09:52.191 "product_name": "Malloc disk",
00:09:52.191 "block_size": 512,
00:09:52.191 "num_blocks": 65536,
00:09:52.191 "uuid": "654a6528-6be9-40be-a79d-8bc92ef1cdfc",
00:09:52.191 "assigned_rate_limits": {
00:09:52.191 "rw_ios_per_sec": 0,
00:09:52.191 "rw_mbytes_per_sec": 0,
00:09:52.191 "r_mbytes_per_sec": 0,
00:09:52.191 "w_mbytes_per_sec": 0
00:09:52.191 },
00:09:52.191 "claimed": true,
00:09:52.191 "claim_type": "exclusive_write",
00:09:52.191 "zoned": false,
00:09:52.191 "supported_io_types": {
00:09:52.191 "read": true,
00:09:52.191 "write": true,
00:09:52.191 "unmap": true,
00:09:52.191 "flush": true,
00:09:52.191 "reset": true,
00:09:52.191 "nvme_admin": false,
00:09:52.191 "nvme_io": false,
00:09:52.191 "nvme_io_md": false,
00:09:52.191 "write_zeroes": true,
00:09:52.191 "zcopy": true,
00:09:52.191 "get_zone_info": false,
00:09:52.191 "zone_management": false,
00:09:52.191 "zone_append": false,
00:09:52.191 "compare": false,
00:09:52.191 "compare_and_write": false,
00:09:52.191 "abort": true,
00:09:52.191 "seek_hole": false,
00:09:52.191 "seek_data": false,
00:09:52.191 "copy": true,
00:09:52.191 "nvme_iov_md": false
00:09:52.191 },
00:09:52.191 "memory_domains": [
00:09:52.191 {
00:09:52.191 "dma_device_id": "system",
00:09:52.191 "dma_device_type": 1
00:09:52.191 },
00:09:52.191 {
00:09:52.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:52.191 "dma_device_type": 2
00:09:52.191 }
00:09:52.191 ],
00:09:52.191 "driver_specific": {}
00:09:52.191 }
00:09:52.191 ]
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:52.191 "name": "Existed_Raid",
00:09:52.191 "uuid": "2e6c5e51-d41d-49ce-8fbf-cf956ccc0764",
00:09:52.191 "strip_size_kb": 64,
00:09:52.191 "state": "configuring",
00:09:52.191 "raid_level": "concat",
00:09:52.191 "superblock": true,
00:09:52.191 "num_base_bdevs": 3,
00:09:52.191 "num_base_bdevs_discovered": 2,
00:09:52.191 "num_base_bdevs_operational": 3,
00:09:52.191 "base_bdevs_list": [
00:09:52.191 {
00:09:52.191 "name": "BaseBdev1",
00:09:52.191 "uuid": "a1f18dad-11eb-417b-8b37-83a044b8bb09",
00:09:52.191 "is_configured": true,
00:09:52.191 "data_offset": 2048,
00:09:52.191 "data_size": 63488
00:09:52.191 },
00:09:52.191 {
00:09:52.191 "name": "BaseBdev2",
00:09:52.191 "uuid": "654a6528-6be9-40be-a79d-8bc92ef1cdfc",
00:09:52.191 "is_configured": true,
00:09:52.191 "data_offset": 2048,
00:09:52.191 "data_size": 63488
00:09:52.191 },
00:09:52.191 {
00:09:52.191 "name": "BaseBdev3",
00:09:52.191 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:52.191 "is_configured": false,
00:09:52.191 "data_offset": 0,
00:09:52.191 "data_size": 0
00:09:52.191 }
00:09:52.191 ]
00:09:52.191 }'
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:52.191 18:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.760 [2024-11-26 18:59:19.186482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:52.760 [2024-11-26 18:59:19.187055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:52.760 BaseBdev3
00:09:52.760 [2024-11-26 18:59:19.187239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:52.760 [2024-11-26 18:59:19.187701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:52.760 [2024-11-26 18:59:19.187969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:52.760 [2024-11-26 18:59:19.187991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.760 [2024-11-26 18:59:19.188218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.760 [
00:09:52.760 {
00:09:52.760 "name": "BaseBdev3",
00:09:52.760 "aliases": [
00:09:52.760 "8fb0b7f7-7511-46fb-9cb3-15e974b34f40"
00:09:52.760 ],
00:09:52.760 "product_name": "Malloc disk",
00:09:52.760 "block_size": 512,
00:09:52.760 "num_blocks": 65536,
00:09:52.760 "uuid": "8fb0b7f7-7511-46fb-9cb3-15e974b34f40",
00:09:52.760 "assigned_rate_limits": {
00:09:52.760 "rw_ios_per_sec": 0,
00:09:52.760 "rw_mbytes_per_sec": 0,
00:09:52.760 "r_mbytes_per_sec": 0,
00:09:52.760 "w_mbytes_per_sec": 0
00:09:52.760 },
00:09:52.760 "claimed": true,
00:09:52.760 "claim_type": "exclusive_write",
00:09:52.760 "zoned": false,
00:09:52.760 "supported_io_types": {
00:09:52.760 "read": true,
00:09:52.760 "write": true,
00:09:52.760 "unmap": true,
00:09:52.760 "flush": true,
00:09:52.760 "reset": true,
00:09:52.760 "nvme_admin": false,
00:09:52.760 "nvme_io": false,
00:09:52.760 "nvme_io_md": false,
00:09:52.760 "write_zeroes": true,
00:09:52.760 "zcopy": true,
00:09:52.760 "get_zone_info": false,
00:09:52.760 "zone_management": false,
00:09:52.760 "zone_append": false,
00:09:52.760 "compare": false,
00:09:52.760 "compare_and_write": false,
00:09:52.760 "abort": true,
00:09:52.760 "seek_hole": false,
00:09:52.760 "seek_data": false,
00:09:52.760 "copy": true,
00:09:52.760 "nvme_iov_md": false
00:09:52.760 },
00:09:52.760 "memory_domains": [
00:09:52.760 {
00:09:52.760 "dma_device_id": "system",
00:09:52.760 "dma_device_type": 1
00:09:52.760 },
00:09:52.760 {
00:09:52.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:52.760 "dma_device_type": 2
00:09:52.760 }
00:09:52.760 ],
00:09:52.760 "driver_specific": {}
00:09:52.760 }
00:09:52.760 ]
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:52.760 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:52.760 "name": "Existed_Raid",
00:09:52.760 "uuid": "2e6c5e51-d41d-49ce-8fbf-cf956ccc0764",
00:09:52.760 "strip_size_kb": 64,
00:09:52.760 "state": "online",
00:09:52.760 "raid_level": "concat",
00:09:52.760 "superblock": true,
00:09:52.760 "num_base_bdevs": 3,
00:09:52.761 "num_base_bdevs_discovered": 3,
00:09:52.761 "num_base_bdevs_operational": 3,
00:09:52.761 "base_bdevs_list": [
00:09:52.761 {
00:09:52.761 "name": "BaseBdev1",
00:09:52.761 "uuid": "a1f18dad-11eb-417b-8b37-83a044b8bb09",
00:09:52.761 "is_configured": true,
00:09:52.761 "data_offset": 2048,
00:09:52.761 "data_size": 63488
00:09:52.761 },
00:09:52.761 {
00:09:52.761 "name": "BaseBdev2",
00:09:52.761 "uuid": "654a6528-6be9-40be-a79d-8bc92ef1cdfc",
00:09:52.761 "is_configured": true,
00:09:52.761 "data_offset": 2048,
00:09:52.761 "data_size": 63488
00:09:52.761 },
00:09:52.761 {
00:09:52.761 "name": "BaseBdev3",
00:09:52.761 "uuid": "8fb0b7f7-7511-46fb-9cb3-15e974b34f40",
00:09:52.761 "is_configured": true,
00:09:52.761 "data_offset": 2048,
00:09:52.761 "data_size": 63488
00:09:52.761 }
00:09:52.761 ]
00:09:52.761 }'
00:09:52.761 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:52.761 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:53.328 [2024-11-26 18:59:19.747120] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:53.328 "name": "Existed_Raid",
00:09:53.328 "aliases": [
00:09:53.328 "2e6c5e51-d41d-49ce-8fbf-cf956ccc0764"
00:09:53.328 ],
00:09:53.328 "product_name": "Raid Volume",
00:09:53.328 "block_size": 512,
00:09:53.328 "num_blocks": 190464,
00:09:53.328 "uuid": "2e6c5e51-d41d-49ce-8fbf-cf956ccc0764",
00:09:53.328 "assigned_rate_limits": {
00:09:53.328 "rw_ios_per_sec": 0,
00:09:53.328 "rw_mbytes_per_sec": 0,
00:09:53.328 "r_mbytes_per_sec": 0,
00:09:53.328 "w_mbytes_per_sec": 0
00:09:53.328 },
00:09:53.328 "claimed": false,
00:09:53.328 "zoned": false,
00:09:53.328 "supported_io_types": {
00:09:53.328 "read": true,
00:09:53.328 "write": true,
00:09:53.328 "unmap": true,
00:09:53.328 "flush": true,
00:09:53.328 "reset": true,
00:09:53.328 "nvme_admin": false,
00:09:53.328 "nvme_io": false,
00:09:53.328 "nvme_io_md": false,
00:09:53.328 "write_zeroes": true,
00:09:53.328 "zcopy": false,
00:09:53.328 "get_zone_info": false,
00:09:53.328 "zone_management": false,
00:09:53.328 "zone_append": false,
00:09:53.328 "compare": false,
00:09:53.328 "compare_and_write": false,
00:09:53.328 "abort": false,
00:09:53.328 "seek_hole": false,
00:09:53.328 "seek_data": false,
00:09:53.328 "copy": false,
00:09:53.328 "nvme_iov_md": false
00:09:53.328 },
00:09:53.328 "memory_domains": [
00:09:53.328 {
00:09:53.328 "dma_device_id": "system",
00:09:53.328 "dma_device_type": 1
00:09:53.328 },
00:09:53.328 {
00:09:53.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:53.328 "dma_device_type": 2
00:09:53.328 },
00:09:53.328 {
00:09:53.328 "dma_device_id": "system",
00:09:53.328 "dma_device_type": 1
00:09:53.328 },
00:09:53.328 {
00:09:53.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:53.328 "dma_device_type": 2
00:09:53.328 },
00:09:53.328 {
00:09:53.328 "dma_device_id": "system",
00:09:53.328 "dma_device_type": 1
00:09:53.328 },
00:09:53.328 {
00:09:53.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:53.328 "dma_device_type": 2
00:09:53.328 }
00:09:53.328 ],
00:09:53.328 "driver_specific": {
00:09:53.328 "raid": {
00:09:53.328 "uuid": "2e6c5e51-d41d-49ce-8fbf-cf956ccc0764",
00:09:53.328 "strip_size_kb": 64,
00:09:53.328 "state": "online",
00:09:53.328 "raid_level": "concat",
00:09:53.328 "superblock": true,
00:09:53.328 "num_base_bdevs": 3,
00:09:53.328 "num_base_bdevs_discovered": 3,
00:09:53.328 "num_base_bdevs_operational": 3,
00:09:53.328 "base_bdevs_list": [
00:09:53.328 {
00:09:53.328 "name": "BaseBdev1",
00:09:53.328 "uuid": "a1f18dad-11eb-417b-8b37-83a044b8bb09",
00:09:53.328 "is_configured": true,
00:09:53.328 "data_offset": 2048,
00:09:53.328 "data_size": 63488
00:09:53.328 },
00:09:53.328 {
00:09:53.328 "name": "BaseBdev2",
00:09:53.328 "uuid": "654a6528-6be9-40be-a79d-8bc92ef1cdfc",
00:09:53.328 "is_configured": true,
00:09:53.328 "data_offset": 2048,
00:09:53.328 "data_size": 63488
00:09:53.328 },
00:09:53.328 {
00:09:53.328 "name": "BaseBdev3",
00:09:53.328 "uuid": "8fb0b7f7-7511-46fb-9cb3-15e974b34f40",
00:09:53.328 "is_configured": true,
00:09:53.328 "data_offset": 2048,
00:09:53.328 "data_size": 63488
00:09:53.328 }
00:09:53.328 ]
00:09:53.328 }
00:09:53.328 }
00:09:53.328 }'
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:53.328 BaseBdev2
00:09:53.328 BaseBdev3'
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:53.328 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:53.329 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.329 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:53.588 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.588 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:53.588 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:53.588 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:53.588 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:53.588 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.588 18:59:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:53.588 18:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:53.588 [2024-11-26 18:59:20.050863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:53.588 [2024-11-26 18:59:20.051026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:53.588 [2024-11-26 18:59:20.051213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:53.588 "name": "Existed_Raid",
00:09:53.588 "uuid": "2e6c5e51-d41d-49ce-8fbf-cf956ccc0764",
00:09:53.588 "strip_size_kb": 64,
00:09:53.588 "state": "offline",
00:09:53.588 "raid_level": "concat",
00:09:53.588 "superblock": true,
00:09:53.588 "num_base_bdevs": 3,
00:09:53.588 "num_base_bdevs_discovered": 2,
00:09:53.588 "num_base_bdevs_operational": 2,
00:09:53.588 "base_bdevs_list": [
00:09:53.588 {
00:09:53.588 "name": null,
00:09:53.588 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:53.588 "is_configured": false,
00:09:53.588 "data_offset": 0,
00:09:53.588 "data_size": 63488
00:09:53.588 },
00:09:53.588 {
00:09:53.588 "name": "BaseBdev2",
00:09:53.588 "uuid": "654a6528-6be9-40be-a79d-8bc92ef1cdfc",
00:09:53.588 "is_configured": true,
00:09:53.588 "data_offset": 2048,
00:09:53.588 "data_size": 63488
00:09:53.588 },
00:09:53.588 {
00:09:53.588 "name": "BaseBdev3",
00:09:53.588 "uuid": "8fb0b7f7-7511-46fb-9cb3-15e974b34f40",
00:09:53.588 "is_configured": true,
00:09:53.588 "data_offset": 2048,
00:09:53.588 "data_size": 63488
00:09:53.588 }
00:09:53.588 ]
00:09:53.588 }'
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:53.588 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.156 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.156 [2024-11-26 18:59:20.723911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.415 [2024-11-26 18:59:20.892844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:54.415 [2024-11-26 18:59:20.893054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:54.415 18:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.415 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.675 BaseBdev2
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.675 [
00:09:54.675 {
00:09:54.675 "name": "BaseBdev2",
00:09:54.675 "aliases": [
00:09:54.675 "f02dfe89-5df2-452c-b854-dcb3cb0cf633"
00:09:54.675 ],
00:09:54.675 "product_name": "Malloc disk",
00:09:54.675 "block_size": 512,
00:09:54.675 "num_blocks": 65536,
00:09:54.675 "uuid": "f02dfe89-5df2-452c-b854-dcb3cb0cf633",
00:09:54.675 "assigned_rate_limits": {
00:09:54.675 "rw_ios_per_sec": 0,
00:09:54.675 "rw_mbytes_per_sec": 0,
00:09:54.675 "r_mbytes_per_sec": 0,
00:09:54.675 "w_mbytes_per_sec": 0
00:09:54.675 },
00:09:54.675 "claimed": false,
00:09:54.675 "zoned": false,
00:09:54.675 "supported_io_types": {
00:09:54.675 "read": true,
00:09:54.675 "write": true,
00:09:54.675 "unmap": true,
00:09:54.675 "flush": true,
00:09:54.675 "reset": true,
00:09:54.675 "nvme_admin": false,
00:09:54.675 "nvme_io": false,
00:09:54.675 "nvme_io_md": false,
00:09:54.675 "write_zeroes": true,
00:09:54.675 "zcopy": true,
00:09:54.675 "get_zone_info": false,
00:09:54.675 "zone_management": false,
00:09:54.675 "zone_append": false,
00:09:54.675 "compare": false,
00:09:54.675 "compare_and_write": false,
00:09:54.675 "abort": true,
00:09:54.675 "seek_hole": false,
00:09:54.675 "seek_data": false,
00:09:54.675 "copy": true,
00:09:54.675 "nvme_iov_md": false
00:09:54.675 },
00:09:54.675 "memory_domains": [
00:09:54.675 {
00:09:54.675 "dma_device_id": "system",
00:09:54.675 "dma_device_type": 1
00:09:54.675 },
00:09:54.675 {
00:09:54.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:54.675 "dma_device_type": 2
00:09:54.675 }
00:09:54.675 ],
00:09:54.675 "driver_specific": {}
00:09:54.675 }
00:09:54.675 ]
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.675 BaseBdev3
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:54.675 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:54.676 [
00:09:54.676 {
00:09:54.676 "name": "BaseBdev3",
00:09:54.676 "aliases": [
00:09:54.676 "9373e31b-66ef-4cd6-bdb6-af5b5401b4e3"
00:09:54.676 ],
00:09:54.676 "product_name": "Malloc disk",
00:09:54.676 "block_size": 512,
00:09:54.676 "num_blocks": 65536,
00:09:54.676 "uuid": "9373e31b-66ef-4cd6-bdb6-af5b5401b4e3",
00:09:54.676 "assigned_rate_limits": {
00:09:54.676 "rw_ios_per_sec": 0,
00:09:54.676 "rw_mbytes_per_sec": 0,
00:09:54.676 "r_mbytes_per_sec": 0, 00:09:54.676 "w_mbytes_per_sec": 0 00:09:54.676 }, 00:09:54.676 "claimed": false, 00:09:54.676 "zoned": false, 00:09:54.676 "supported_io_types": { 00:09:54.676 "read": true, 00:09:54.676 "write": true, 00:09:54.676 "unmap": true, 00:09:54.676 "flush": true, 00:09:54.676 "reset": true, 00:09:54.676 "nvme_admin": false, 00:09:54.676 "nvme_io": false, 00:09:54.676 "nvme_io_md": false, 00:09:54.676 "write_zeroes": true, 00:09:54.676 "zcopy": true, 00:09:54.676 "get_zone_info": false, 00:09:54.676 "zone_management": false, 00:09:54.676 "zone_append": false, 00:09:54.676 "compare": false, 00:09:54.676 "compare_and_write": false, 00:09:54.676 "abort": true, 00:09:54.676 "seek_hole": false, 00:09:54.676 "seek_data": false, 00:09:54.676 "copy": true, 00:09:54.676 "nvme_iov_md": false 00:09:54.676 }, 00:09:54.676 "memory_domains": [ 00:09:54.676 { 00:09:54.676 "dma_device_id": "system", 00:09:54.676 "dma_device_type": 1 00:09:54.676 }, 00:09:54.676 { 00:09:54.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.676 "dma_device_type": 2 00:09:54.676 } 00:09:54.676 ], 00:09:54.676 "driver_specific": {} 00:09:54.676 } 00:09:54.676 ] 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.676 [2024-11-26 18:59:21.198119] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.676 [2024-11-26 18:59:21.198320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.676 [2024-11-26 18:59:21.198459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.676 [2024-11-26 18:59:21.201026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.676 18:59:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.676 "name": "Existed_Raid", 00:09:54.676 "uuid": "137f487f-fa92-4f4f-8810-ee36c8b95a55", 00:09:54.676 "strip_size_kb": 64, 00:09:54.676 "state": "configuring", 00:09:54.676 "raid_level": "concat", 00:09:54.676 "superblock": true, 00:09:54.676 "num_base_bdevs": 3, 00:09:54.676 "num_base_bdevs_discovered": 2, 00:09:54.676 "num_base_bdevs_operational": 3, 00:09:54.676 "base_bdevs_list": [ 00:09:54.676 { 00:09:54.676 "name": "BaseBdev1", 00:09:54.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.676 "is_configured": false, 00:09:54.676 "data_offset": 0, 00:09:54.676 "data_size": 0 00:09:54.676 }, 00:09:54.676 { 00:09:54.676 "name": "BaseBdev2", 00:09:54.676 "uuid": "f02dfe89-5df2-452c-b854-dcb3cb0cf633", 00:09:54.676 "is_configured": true, 00:09:54.676 "data_offset": 2048, 00:09:54.676 "data_size": 63488 00:09:54.676 }, 00:09:54.676 { 00:09:54.676 "name": "BaseBdev3", 00:09:54.676 "uuid": "9373e31b-66ef-4cd6-bdb6-af5b5401b4e3", 00:09:54.676 "is_configured": true, 00:09:54.676 "data_offset": 2048, 00:09:54.676 "data_size": 63488 00:09:54.676 } 00:09:54.676 ] 00:09:54.676 }' 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.676 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.244 [2024-11-26 18:59:21.734356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.244 18:59:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.244 "name": "Existed_Raid", 00:09:55.244 "uuid": "137f487f-fa92-4f4f-8810-ee36c8b95a55", 00:09:55.244 "strip_size_kb": 64, 00:09:55.244 "state": "configuring", 00:09:55.244 "raid_level": "concat", 00:09:55.244 "superblock": true, 00:09:55.244 "num_base_bdevs": 3, 00:09:55.244 "num_base_bdevs_discovered": 1, 00:09:55.244 "num_base_bdevs_operational": 3, 00:09:55.244 "base_bdevs_list": [ 00:09:55.244 { 00:09:55.244 "name": "BaseBdev1", 00:09:55.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.244 "is_configured": false, 00:09:55.244 "data_offset": 0, 00:09:55.244 "data_size": 0 00:09:55.244 }, 00:09:55.244 { 00:09:55.244 "name": null, 00:09:55.244 "uuid": "f02dfe89-5df2-452c-b854-dcb3cb0cf633", 00:09:55.244 "is_configured": false, 00:09:55.244 "data_offset": 0, 00:09:55.244 "data_size": 63488 00:09:55.244 }, 00:09:55.244 { 00:09:55.244 "name": "BaseBdev3", 00:09:55.244 "uuid": "9373e31b-66ef-4cd6-bdb6-af5b5401b4e3", 00:09:55.244 "is_configured": true, 00:09:55.244 "data_offset": 2048, 00:09:55.244 "data_size": 63488 00:09:55.244 } 00:09:55.244 ] 00:09:55.244 }' 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.244 18:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.815 [2024-11-26 18:59:22.365840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.815 BaseBdev1 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.815 [ 00:09:55.815 { 00:09:55.815 "name": "BaseBdev1", 00:09:55.815 "aliases": [ 00:09:55.815 "1dfe7214-3a6a-4a61-a121-17301e4020a7" 00:09:55.815 ], 00:09:55.815 "product_name": "Malloc disk", 00:09:55.815 "block_size": 512, 00:09:55.815 "num_blocks": 65536, 00:09:55.815 "uuid": "1dfe7214-3a6a-4a61-a121-17301e4020a7", 00:09:55.815 "assigned_rate_limits": { 00:09:55.815 "rw_ios_per_sec": 0, 00:09:55.815 "rw_mbytes_per_sec": 0, 00:09:55.815 "r_mbytes_per_sec": 0, 00:09:55.815 "w_mbytes_per_sec": 0 00:09:55.815 }, 00:09:55.815 "claimed": true, 00:09:55.815 "claim_type": "exclusive_write", 00:09:55.815 "zoned": false, 00:09:55.815 "supported_io_types": { 00:09:55.815 "read": true, 00:09:55.815 "write": true, 00:09:55.815 "unmap": true, 00:09:55.815 "flush": true, 00:09:55.815 "reset": true, 00:09:55.815 "nvme_admin": false, 00:09:55.815 "nvme_io": false, 00:09:55.815 "nvme_io_md": false, 00:09:55.815 "write_zeroes": true, 00:09:55.815 "zcopy": true, 00:09:55.815 "get_zone_info": false, 00:09:55.815 "zone_management": false, 00:09:55.815 "zone_append": false, 00:09:55.815 "compare": false, 00:09:55.815 "compare_and_write": false, 00:09:55.815 "abort": true, 00:09:55.815 "seek_hole": false, 00:09:55.815 "seek_data": false, 00:09:55.815 "copy": true, 00:09:55.815 "nvme_iov_md": false 00:09:55.815 }, 00:09:55.815 "memory_domains": [ 00:09:55.815 { 00:09:55.815 "dma_device_id": "system", 00:09:55.815 "dma_device_type": 1 00:09:55.815 }, 00:09:55.815 { 00:09:55.815 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:55.815 "dma_device_type": 2 00:09:55.815 } 00:09:55.815 ], 00:09:55.815 "driver_specific": {} 00:09:55.815 } 00:09:55.815 ] 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:55.815 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.075 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.075 "name": "Existed_Raid", 00:09:56.075 "uuid": "137f487f-fa92-4f4f-8810-ee36c8b95a55", 00:09:56.075 "strip_size_kb": 64, 00:09:56.075 "state": "configuring", 00:09:56.075 "raid_level": "concat", 00:09:56.075 "superblock": true, 00:09:56.075 "num_base_bdevs": 3, 00:09:56.075 "num_base_bdevs_discovered": 2, 00:09:56.075 "num_base_bdevs_operational": 3, 00:09:56.075 "base_bdevs_list": [ 00:09:56.075 { 00:09:56.075 "name": "BaseBdev1", 00:09:56.075 "uuid": "1dfe7214-3a6a-4a61-a121-17301e4020a7", 00:09:56.075 "is_configured": true, 00:09:56.075 "data_offset": 2048, 00:09:56.075 "data_size": 63488 00:09:56.075 }, 00:09:56.075 { 00:09:56.075 "name": null, 00:09:56.075 "uuid": "f02dfe89-5df2-452c-b854-dcb3cb0cf633", 00:09:56.075 "is_configured": false, 00:09:56.075 "data_offset": 0, 00:09:56.075 "data_size": 63488 00:09:56.075 }, 00:09:56.075 { 00:09:56.075 "name": "BaseBdev3", 00:09:56.075 "uuid": "9373e31b-66ef-4cd6-bdb6-af5b5401b4e3", 00:09:56.075 "is_configured": true, 00:09:56.075 "data_offset": 2048, 00:09:56.075 "data_size": 63488 00:09:56.075 } 00:09:56.075 ] 00:09:56.075 }' 00:09:56.075 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.075 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.350 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.350 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:56.350 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.350 18:59:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:56.350 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.618 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:56.618 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.619 [2024-11-26 18:59:22.986228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.619 18:59:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.619 18:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.619 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.619 "name": "Existed_Raid", 00:09:56.619 "uuid": "137f487f-fa92-4f4f-8810-ee36c8b95a55", 00:09:56.619 "strip_size_kb": 64, 00:09:56.619 "state": "configuring", 00:09:56.619 "raid_level": "concat", 00:09:56.619 "superblock": true, 00:09:56.619 "num_base_bdevs": 3, 00:09:56.619 "num_base_bdevs_discovered": 1, 00:09:56.619 "num_base_bdevs_operational": 3, 00:09:56.619 "base_bdevs_list": [ 00:09:56.619 { 00:09:56.619 "name": "BaseBdev1", 00:09:56.619 "uuid": "1dfe7214-3a6a-4a61-a121-17301e4020a7", 00:09:56.619 "is_configured": true, 00:09:56.619 "data_offset": 2048, 00:09:56.619 "data_size": 63488 00:09:56.619 }, 00:09:56.619 { 00:09:56.619 "name": null, 00:09:56.619 "uuid": "f02dfe89-5df2-452c-b854-dcb3cb0cf633", 00:09:56.619 "is_configured": false, 00:09:56.619 "data_offset": 0, 00:09:56.619 "data_size": 63488 00:09:56.619 }, 00:09:56.619 { 00:09:56.619 "name": null, 00:09:56.619 "uuid": "9373e31b-66ef-4cd6-bdb6-af5b5401b4e3", 00:09:56.619 "is_configured": false, 00:09:56.619 "data_offset": 0, 00:09:56.619 "data_size": 63488 00:09:56.619 } 00:09:56.619 ] 00:09:56.619 }' 00:09:56.619 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.619 18:59:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.878 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:56.878 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.878 18:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.878 18:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.878 18:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.137 [2024-11-26 18:59:23.506243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.137 18:59:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.137 "name": "Existed_Raid", 00:09:57.137 "uuid": "137f487f-fa92-4f4f-8810-ee36c8b95a55", 00:09:57.137 "strip_size_kb": 64, 00:09:57.137 "state": "configuring", 00:09:57.137 "raid_level": "concat", 00:09:57.137 "superblock": true, 00:09:57.137 "num_base_bdevs": 3, 00:09:57.137 "num_base_bdevs_discovered": 2, 00:09:57.137 "num_base_bdevs_operational": 3, 00:09:57.137 "base_bdevs_list": [ 00:09:57.137 { 00:09:57.137 "name": "BaseBdev1", 00:09:57.137 "uuid": "1dfe7214-3a6a-4a61-a121-17301e4020a7", 00:09:57.137 "is_configured": true, 00:09:57.137 "data_offset": 2048, 00:09:57.137 "data_size": 63488 00:09:57.137 }, 00:09:57.137 { 00:09:57.137 "name": null, 00:09:57.137 "uuid": "f02dfe89-5df2-452c-b854-dcb3cb0cf633", 00:09:57.137 "is_configured": 
false, 00:09:57.137 "data_offset": 0, 00:09:57.137 "data_size": 63488 00:09:57.137 }, 00:09:57.137 { 00:09:57.137 "name": "BaseBdev3", 00:09:57.137 "uuid": "9373e31b-66ef-4cd6-bdb6-af5b5401b4e3", 00:09:57.137 "is_configured": true, 00:09:57.137 "data_offset": 2048, 00:09:57.137 "data_size": 63488 00:09:57.137 } 00:09:57.137 ] 00:09:57.137 }' 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.137 18:59:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.703 [2024-11-26 18:59:24.114482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:57.703 18:59:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.703 "name": "Existed_Raid", 00:09:57.703 "uuid": "137f487f-fa92-4f4f-8810-ee36c8b95a55", 00:09:57.703 "strip_size_kb": 64, 00:09:57.703 "state": "configuring", 00:09:57.703 "raid_level": "concat", 00:09:57.703 "superblock": true, 00:09:57.703 "num_base_bdevs": 3, 00:09:57.703 
"num_base_bdevs_discovered": 1, 00:09:57.703 "num_base_bdevs_operational": 3, 00:09:57.703 "base_bdevs_list": [ 00:09:57.703 { 00:09:57.703 "name": null, 00:09:57.703 "uuid": "1dfe7214-3a6a-4a61-a121-17301e4020a7", 00:09:57.703 "is_configured": false, 00:09:57.703 "data_offset": 0, 00:09:57.703 "data_size": 63488 00:09:57.703 }, 00:09:57.703 { 00:09:57.703 "name": null, 00:09:57.703 "uuid": "f02dfe89-5df2-452c-b854-dcb3cb0cf633", 00:09:57.703 "is_configured": false, 00:09:57.703 "data_offset": 0, 00:09:57.703 "data_size": 63488 00:09:57.703 }, 00:09:57.703 { 00:09:57.703 "name": "BaseBdev3", 00:09:57.703 "uuid": "9373e31b-66ef-4cd6-bdb6-af5b5401b4e3", 00:09:57.703 "is_configured": true, 00:09:57.703 "data_offset": 2048, 00:09:57.703 "data_size": 63488 00:09:57.703 } 00:09:57.703 ] 00:09:57.703 }' 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.703 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.269 18:59:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.269 [2024-11-26 18:59:24.813110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.269 
18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.269 "name": "Existed_Raid", 00:09:58.269 "uuid": "137f487f-fa92-4f4f-8810-ee36c8b95a55", 00:09:58.269 "strip_size_kb": 64, 00:09:58.269 "state": "configuring", 00:09:58.269 "raid_level": "concat", 00:09:58.269 "superblock": true, 00:09:58.269 "num_base_bdevs": 3, 00:09:58.269 "num_base_bdevs_discovered": 2, 00:09:58.269 "num_base_bdevs_operational": 3, 00:09:58.269 "base_bdevs_list": [ 00:09:58.269 { 00:09:58.269 "name": null, 00:09:58.269 "uuid": "1dfe7214-3a6a-4a61-a121-17301e4020a7", 00:09:58.269 "is_configured": false, 00:09:58.269 "data_offset": 0, 00:09:58.269 "data_size": 63488 00:09:58.269 }, 00:09:58.269 { 00:09:58.269 "name": "BaseBdev2", 00:09:58.269 "uuid": "f02dfe89-5df2-452c-b854-dcb3cb0cf633", 00:09:58.269 "is_configured": true, 00:09:58.269 "data_offset": 2048, 00:09:58.269 "data_size": 63488 00:09:58.269 }, 00:09:58.269 { 00:09:58.269 "name": "BaseBdev3", 00:09:58.269 "uuid": "9373e31b-66ef-4cd6-bdb6-af5b5401b4e3", 00:09:58.269 "is_configured": true, 00:09:58.269 "data_offset": 2048, 00:09:58.269 "data_size": 63488 00:09:58.269 } 00:09:58.269 ] 00:09:58.269 }' 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.269 18:59:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1dfe7214-3a6a-4a61-a121-17301e4020a7 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.835 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.093 [2024-11-26 18:59:25.463404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:59.093 NewBaseBdev 00:09:59.093 [2024-11-26 18:59:25.463927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:59.093 [2024-11-26 18:59:25.463960] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:59.093 [2024-11-26 18:59:25.464307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:59.093 [2024-11-26 18:59:25.464509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:59.093 [2024-11-26 18:59:25.464526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:09:59.093 [2024-11-26 18:59:25.464704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.093 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.093 [ 00:09:59.093 { 00:09:59.093 "name": "NewBaseBdev", 00:09:59.093 "aliases": [ 00:09:59.093 "1dfe7214-3a6a-4a61-a121-17301e4020a7" 00:09:59.093 ], 00:09:59.093 "product_name": "Malloc disk", 00:09:59.093 "block_size": 512, 
00:09:59.093 "num_blocks": 65536, 00:09:59.093 "uuid": "1dfe7214-3a6a-4a61-a121-17301e4020a7", 00:09:59.093 "assigned_rate_limits": { 00:09:59.093 "rw_ios_per_sec": 0, 00:09:59.093 "rw_mbytes_per_sec": 0, 00:09:59.093 "r_mbytes_per_sec": 0, 00:09:59.093 "w_mbytes_per_sec": 0 00:09:59.093 }, 00:09:59.093 "claimed": true, 00:09:59.093 "claim_type": "exclusive_write", 00:09:59.093 "zoned": false, 00:09:59.093 "supported_io_types": { 00:09:59.093 "read": true, 00:09:59.093 "write": true, 00:09:59.093 "unmap": true, 00:09:59.093 "flush": true, 00:09:59.093 "reset": true, 00:09:59.094 "nvme_admin": false, 00:09:59.094 "nvme_io": false, 00:09:59.094 "nvme_io_md": false, 00:09:59.094 "write_zeroes": true, 00:09:59.094 "zcopy": true, 00:09:59.094 "get_zone_info": false, 00:09:59.094 "zone_management": false, 00:09:59.094 "zone_append": false, 00:09:59.094 "compare": false, 00:09:59.094 "compare_and_write": false, 00:09:59.094 "abort": true, 00:09:59.094 "seek_hole": false, 00:09:59.094 "seek_data": false, 00:09:59.094 "copy": true, 00:09:59.094 "nvme_iov_md": false 00:09:59.094 }, 00:09:59.094 "memory_domains": [ 00:09:59.094 { 00:09:59.094 "dma_device_id": "system", 00:09:59.094 "dma_device_type": 1 00:09:59.094 }, 00:09:59.094 { 00:09:59.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.094 "dma_device_type": 2 00:09:59.094 } 00:09:59.094 ], 00:09:59.094 "driver_specific": {} 00:09:59.094 } 00:09:59.094 ] 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.094 "name": "Existed_Raid", 00:09:59.094 "uuid": "137f487f-fa92-4f4f-8810-ee36c8b95a55", 00:09:59.094 "strip_size_kb": 64, 00:09:59.094 "state": "online", 00:09:59.094 "raid_level": "concat", 00:09:59.094 "superblock": true, 00:09:59.094 "num_base_bdevs": 3, 00:09:59.094 "num_base_bdevs_discovered": 3, 00:09:59.094 "num_base_bdevs_operational": 3, 00:09:59.094 "base_bdevs_list": [ 00:09:59.094 { 00:09:59.094 "name": "NewBaseBdev", 00:09:59.094 "uuid": 
"1dfe7214-3a6a-4a61-a121-17301e4020a7", 00:09:59.094 "is_configured": true, 00:09:59.094 "data_offset": 2048, 00:09:59.094 "data_size": 63488 00:09:59.094 }, 00:09:59.094 { 00:09:59.094 "name": "BaseBdev2", 00:09:59.094 "uuid": "f02dfe89-5df2-452c-b854-dcb3cb0cf633", 00:09:59.094 "is_configured": true, 00:09:59.094 "data_offset": 2048, 00:09:59.094 "data_size": 63488 00:09:59.094 }, 00:09:59.094 { 00:09:59.094 "name": "BaseBdev3", 00:09:59.094 "uuid": "9373e31b-66ef-4cd6-bdb6-af5b5401b4e3", 00:09:59.094 "is_configured": true, 00:09:59.094 "data_offset": 2048, 00:09:59.094 "data_size": 63488 00:09:59.094 } 00:09:59.094 ] 00:09:59.094 }' 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.094 18:59:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.660 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.660 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.660 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.660 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.660 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.660 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.660 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.660 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.660 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.660 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:59.660 [2024-11-26 18:59:26.023953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.661 "name": "Existed_Raid", 00:09:59.661 "aliases": [ 00:09:59.661 "137f487f-fa92-4f4f-8810-ee36c8b95a55" 00:09:59.661 ], 00:09:59.661 "product_name": "Raid Volume", 00:09:59.661 "block_size": 512, 00:09:59.661 "num_blocks": 190464, 00:09:59.661 "uuid": "137f487f-fa92-4f4f-8810-ee36c8b95a55", 00:09:59.661 "assigned_rate_limits": { 00:09:59.661 "rw_ios_per_sec": 0, 00:09:59.661 "rw_mbytes_per_sec": 0, 00:09:59.661 "r_mbytes_per_sec": 0, 00:09:59.661 "w_mbytes_per_sec": 0 00:09:59.661 }, 00:09:59.661 "claimed": false, 00:09:59.661 "zoned": false, 00:09:59.661 "supported_io_types": { 00:09:59.661 "read": true, 00:09:59.661 "write": true, 00:09:59.661 "unmap": true, 00:09:59.661 "flush": true, 00:09:59.661 "reset": true, 00:09:59.661 "nvme_admin": false, 00:09:59.661 "nvme_io": false, 00:09:59.661 "nvme_io_md": false, 00:09:59.661 "write_zeroes": true, 00:09:59.661 "zcopy": false, 00:09:59.661 "get_zone_info": false, 00:09:59.661 "zone_management": false, 00:09:59.661 "zone_append": false, 00:09:59.661 "compare": false, 00:09:59.661 "compare_and_write": false, 00:09:59.661 "abort": false, 00:09:59.661 "seek_hole": false, 00:09:59.661 "seek_data": false, 00:09:59.661 "copy": false, 00:09:59.661 "nvme_iov_md": false 00:09:59.661 }, 00:09:59.661 "memory_domains": [ 00:09:59.661 { 00:09:59.661 "dma_device_id": "system", 00:09:59.661 "dma_device_type": 1 00:09:59.661 }, 00:09:59.661 { 00:09:59.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.661 "dma_device_type": 2 00:09:59.661 }, 00:09:59.661 { 00:09:59.661 "dma_device_id": "system", 00:09:59.661 "dma_device_type": 1 00:09:59.661 }, 00:09:59.661 { 00:09:59.661 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.661 "dma_device_type": 2 00:09:59.661 }, 00:09:59.661 { 00:09:59.661 "dma_device_id": "system", 00:09:59.661 "dma_device_type": 1 00:09:59.661 }, 00:09:59.661 { 00:09:59.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.661 "dma_device_type": 2 00:09:59.661 } 00:09:59.661 ], 00:09:59.661 "driver_specific": { 00:09:59.661 "raid": { 00:09:59.661 "uuid": "137f487f-fa92-4f4f-8810-ee36c8b95a55", 00:09:59.661 "strip_size_kb": 64, 00:09:59.661 "state": "online", 00:09:59.661 "raid_level": "concat", 00:09:59.661 "superblock": true, 00:09:59.661 "num_base_bdevs": 3, 00:09:59.661 "num_base_bdevs_discovered": 3, 00:09:59.661 "num_base_bdevs_operational": 3, 00:09:59.661 "base_bdevs_list": [ 00:09:59.661 { 00:09:59.661 "name": "NewBaseBdev", 00:09:59.661 "uuid": "1dfe7214-3a6a-4a61-a121-17301e4020a7", 00:09:59.661 "is_configured": true, 00:09:59.661 "data_offset": 2048, 00:09:59.661 "data_size": 63488 00:09:59.661 }, 00:09:59.661 { 00:09:59.661 "name": "BaseBdev2", 00:09:59.661 "uuid": "f02dfe89-5df2-452c-b854-dcb3cb0cf633", 00:09:59.661 "is_configured": true, 00:09:59.661 "data_offset": 2048, 00:09:59.661 "data_size": 63488 00:09:59.661 }, 00:09:59.661 { 00:09:59.661 "name": "BaseBdev3", 00:09:59.661 "uuid": "9373e31b-66ef-4cd6-bdb6-af5b5401b4e3", 00:09:59.661 "is_configured": true, 00:09:59.661 "data_offset": 2048, 00:09:59.661 "data_size": 63488 00:09:59.661 } 00:09:59.661 ] 00:09:59.661 } 00:09:59.661 } 00:09:59.661 }' 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:59.661 BaseBdev2 00:09:59.661 BaseBdev3' 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.661 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.919 [2024-11-26 18:59:26.347641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.919 [2024-11-26 18:59:26.347803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.919 [2024-11-26 18:59:26.347938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.919 [2024-11-26 18:59:26.348022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.919 [2024-11-26 18:59:26.348044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66611 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66611 ']' 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66611 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66611 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.919 killing process with pid 66611 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66611' 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66611 00:09:59.919 [2024-11-26 18:59:26.387256] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.919 18:59:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66611 00:10:00.178 [2024-11-26 18:59:26.674613] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.555 18:59:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:01.555 00:10:01.555 real 0m12.042s 00:10:01.555 user 0m19.874s 00:10:01.555 sys 0m1.631s 00:10:01.555 18:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:01.555 ************************************ 00:10:01.555 END TEST raid_state_function_test_sb 00:10:01.555 ************************************ 00:10:01.555 18:59:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.555 18:59:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:01.555 18:59:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.555 18:59:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.555 18:59:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.555 ************************************ 00:10:01.555 START TEST raid_superblock_test 00:10:01.555 ************************************ 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:01.555 18:59:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67248 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67248 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67248 ']' 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.555 18:59:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.555 [2024-11-26 18:59:27.963243] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:10:01.555 [2024-11-26 18:59:27.963429] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67248 ] 00:10:01.555 [2024-11-26 18:59:28.142455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.813 [2024-11-26 18:59:28.291418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.108 [2024-11-26 18:59:28.518468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.108 [2024-11-26 18:59:28.518509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:02.694 
18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.694 malloc1 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.694 [2024-11-26 18:59:29.092144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:02.694 [2024-11-26 18:59:29.092380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.694 [2024-11-26 18:59:29.092461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:02.694 [2024-11-26 18:59:29.092654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.694 [2024-11-26 18:59:29.095645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.694 [2024-11-26 18:59:29.095812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:02.694 pt1 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.694 malloc2 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.694 [2024-11-26 18:59:29.152598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.694 [2024-11-26 18:59:29.152798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.694 [2024-11-26 18:59:29.152882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:02.694 [2024-11-26 18:59:29.153046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.694 [2024-11-26 18:59:29.156035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.694 [2024-11-26 18:59:29.156082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:02.694 
pt2 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.694 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.695 malloc3 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.695 [2024-11-26 18:59:29.222450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:02.695 [2024-11-26 18:59:29.222523] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.695 [2024-11-26 18:59:29.222558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:02.695 [2024-11-26 18:59:29.222575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.695 [2024-11-26 18:59:29.225461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.695 [2024-11-26 18:59:29.225509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:02.695 pt3 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.695 [2024-11-26 18:59:29.230516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:02.695 [2024-11-26 18:59:29.233047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.695 [2024-11-26 18:59:29.233334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:02.695 [2024-11-26 18:59:29.233566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:02.695 [2024-11-26 18:59:29.233591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:02.695 [2024-11-26 18:59:29.233899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:02.695 [2024-11-26 18:59:29.234112] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:02.695 [2024-11-26 18:59:29.234128] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:02.695 [2024-11-26 18:59:29.234341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.695 18:59:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.695 "name": "raid_bdev1", 00:10:02.695 "uuid": "74bbeeea-1ed2-49da-998b-63aeb28b4d58", 00:10:02.695 "strip_size_kb": 64, 00:10:02.695 "state": "online", 00:10:02.695 "raid_level": "concat", 00:10:02.695 "superblock": true, 00:10:02.695 "num_base_bdevs": 3, 00:10:02.695 "num_base_bdevs_discovered": 3, 00:10:02.695 "num_base_bdevs_operational": 3, 00:10:02.695 "base_bdevs_list": [ 00:10:02.695 { 00:10:02.695 "name": "pt1", 00:10:02.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.695 "is_configured": true, 00:10:02.695 "data_offset": 2048, 00:10:02.695 "data_size": 63488 00:10:02.695 }, 00:10:02.695 { 00:10:02.695 "name": "pt2", 00:10:02.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.695 "is_configured": true, 00:10:02.695 "data_offset": 2048, 00:10:02.695 "data_size": 63488 00:10:02.695 }, 00:10:02.695 { 00:10:02.695 "name": "pt3", 00:10:02.695 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.695 "is_configured": true, 00:10:02.695 "data_offset": 2048, 00:10:02.695 "data_size": 63488 00:10:02.695 } 00:10:02.695 ] 00:10:02.695 }' 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.695 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.262 [2024-11-26 18:59:29.775062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.262 "name": "raid_bdev1", 00:10:03.262 "aliases": [ 00:10:03.262 "74bbeeea-1ed2-49da-998b-63aeb28b4d58" 00:10:03.262 ], 00:10:03.262 "product_name": "Raid Volume", 00:10:03.262 "block_size": 512, 00:10:03.262 "num_blocks": 190464, 00:10:03.262 "uuid": "74bbeeea-1ed2-49da-998b-63aeb28b4d58", 00:10:03.262 "assigned_rate_limits": { 00:10:03.262 "rw_ios_per_sec": 0, 00:10:03.262 "rw_mbytes_per_sec": 0, 00:10:03.262 "r_mbytes_per_sec": 0, 00:10:03.262 "w_mbytes_per_sec": 0 00:10:03.262 }, 00:10:03.262 "claimed": false, 00:10:03.262 "zoned": false, 00:10:03.262 "supported_io_types": { 00:10:03.262 "read": true, 00:10:03.262 "write": true, 00:10:03.262 "unmap": true, 00:10:03.262 "flush": true, 00:10:03.262 "reset": true, 00:10:03.262 "nvme_admin": false, 00:10:03.262 "nvme_io": false, 00:10:03.262 "nvme_io_md": false, 00:10:03.262 "write_zeroes": true, 00:10:03.262 "zcopy": false, 00:10:03.262 "get_zone_info": false, 00:10:03.262 "zone_management": false, 00:10:03.262 "zone_append": false, 00:10:03.262 "compare": 
false, 00:10:03.262 "compare_and_write": false, 00:10:03.262 "abort": false, 00:10:03.262 "seek_hole": false, 00:10:03.262 "seek_data": false, 00:10:03.262 "copy": false, 00:10:03.262 "nvme_iov_md": false 00:10:03.262 }, 00:10:03.262 "memory_domains": [ 00:10:03.262 { 00:10:03.262 "dma_device_id": "system", 00:10:03.262 "dma_device_type": 1 00:10:03.262 }, 00:10:03.262 { 00:10:03.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.262 "dma_device_type": 2 00:10:03.262 }, 00:10:03.262 { 00:10:03.262 "dma_device_id": "system", 00:10:03.262 "dma_device_type": 1 00:10:03.262 }, 00:10:03.262 { 00:10:03.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.262 "dma_device_type": 2 00:10:03.262 }, 00:10:03.262 { 00:10:03.262 "dma_device_id": "system", 00:10:03.262 "dma_device_type": 1 00:10:03.262 }, 00:10:03.262 { 00:10:03.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.262 "dma_device_type": 2 00:10:03.262 } 00:10:03.262 ], 00:10:03.262 "driver_specific": { 00:10:03.262 "raid": { 00:10:03.262 "uuid": "74bbeeea-1ed2-49da-998b-63aeb28b4d58", 00:10:03.262 "strip_size_kb": 64, 00:10:03.262 "state": "online", 00:10:03.262 "raid_level": "concat", 00:10:03.262 "superblock": true, 00:10:03.262 "num_base_bdevs": 3, 00:10:03.262 "num_base_bdevs_discovered": 3, 00:10:03.262 "num_base_bdevs_operational": 3, 00:10:03.262 "base_bdevs_list": [ 00:10:03.262 { 00:10:03.262 "name": "pt1", 00:10:03.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.262 "is_configured": true, 00:10:03.262 "data_offset": 2048, 00:10:03.262 "data_size": 63488 00:10:03.262 }, 00:10:03.262 { 00:10:03.262 "name": "pt2", 00:10:03.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.262 "is_configured": true, 00:10:03.262 "data_offset": 2048, 00:10:03.262 "data_size": 63488 00:10:03.262 }, 00:10:03.262 { 00:10:03.262 "name": "pt3", 00:10:03.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.262 "is_configured": true, 00:10:03.262 "data_offset": 2048, 00:10:03.262 
"data_size": 63488 00:10:03.262 } 00:10:03.262 ] 00:10:03.262 } 00:10:03.262 } 00:10:03.262 }' 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:03.262 pt2 00:10:03.262 pt3' 00:10:03.262 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.520 18:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:03.520 [2024-11-26 18:59:30.075003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=74bbeeea-1ed2-49da-998b-63aeb28b4d58 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 74bbeeea-1ed2-49da-998b-63aeb28b4d58 ']' 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.520 [2024-11-26 18:59:30.126668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.520 [2024-11-26 18:59:30.126833] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.520 [2024-11-26 18:59:30.127035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.520 [2024-11-26 18:59:30.127262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.520 [2024-11-26 18:59:30.127410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.520 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.779 [2024-11-26 18:59:30.262776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:03.779 [2024-11-26 18:59:30.265554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:03.779 
[2024-11-26 18:59:30.265628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:03.779 [2024-11-26 18:59:30.265707] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:03.779 [2024-11-26 18:59:30.265783] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:03.779 [2024-11-26 18:59:30.265817] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:03.779 [2024-11-26 18:59:30.265844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:03.779 [2024-11-26 18:59:30.265857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:03.779 request: 00:10:03.779 { 00:10:03.779 "name": "raid_bdev1", 00:10:03.779 "raid_level": "concat", 00:10:03.779 "base_bdevs": [ 00:10:03.779 "malloc1", 00:10:03.779 "malloc2", 00:10:03.779 "malloc3" 00:10:03.779 ], 00:10:03.779 "strip_size_kb": 64, 00:10:03.779 "superblock": false, 00:10:03.779 "method": "bdev_raid_create", 00:10:03.779 "req_id": 1 00:10:03.779 } 00:10:03.779 Got JSON-RPC error response 00:10:03.779 response: 00:10:03.779 { 00:10:03.779 "code": -17, 00:10:03.779 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:03.779 } 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:03.779 18:59:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:03.779 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.780 [2024-11-26 18:59:30.314725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:03.780 [2024-11-26 18:59:30.314902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.780 [2024-11-26 18:59:30.314977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:03.780 [2024-11-26 18:59:30.315081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.780 [2024-11-26 18:59:30.318111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.780 [2024-11-26 18:59:30.318160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:03.780 [2024-11-26 18:59:30.318267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:03.780 [2024-11-26 18:59:30.318350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:10:03.780 pt1 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.780 "name": "raid_bdev1", 00:10:03.780 "uuid": 
"74bbeeea-1ed2-49da-998b-63aeb28b4d58", 00:10:03.780 "strip_size_kb": 64, 00:10:03.780 "state": "configuring", 00:10:03.780 "raid_level": "concat", 00:10:03.780 "superblock": true, 00:10:03.780 "num_base_bdevs": 3, 00:10:03.780 "num_base_bdevs_discovered": 1, 00:10:03.780 "num_base_bdevs_operational": 3, 00:10:03.780 "base_bdevs_list": [ 00:10:03.780 { 00:10:03.780 "name": "pt1", 00:10:03.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.780 "is_configured": true, 00:10:03.780 "data_offset": 2048, 00:10:03.780 "data_size": 63488 00:10:03.780 }, 00:10:03.780 { 00:10:03.780 "name": null, 00:10:03.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.780 "is_configured": false, 00:10:03.780 "data_offset": 2048, 00:10:03.780 "data_size": 63488 00:10:03.780 }, 00:10:03.780 { 00:10:03.780 "name": null, 00:10:03.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.780 "is_configured": false, 00:10:03.780 "data_offset": 2048, 00:10:03.780 "data_size": 63488 00:10:03.780 } 00:10:03.780 ] 00:10:03.780 }' 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.780 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.346 [2024-11-26 18:59:30.806908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.346 [2024-11-26 18:59:30.807129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.346 [2024-11-26 18:59:30.807216] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:04.346 [2024-11-26 18:59:30.807239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.346 [2024-11-26 18:59:30.807888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.346 [2024-11-26 18:59:30.807915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.346 [2024-11-26 18:59:30.808038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:04.346 [2024-11-26 18:59:30.808083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.346 pt2 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.346 [2024-11-26 18:59:30.814882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:04.346 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.347 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.347 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.347 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.347 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.347 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.347 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.347 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.347 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.347 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.347 "name": "raid_bdev1", 00:10:04.347 "uuid": "74bbeeea-1ed2-49da-998b-63aeb28b4d58", 00:10:04.347 "strip_size_kb": 64, 00:10:04.347 "state": "configuring", 00:10:04.347 "raid_level": "concat", 00:10:04.347 "superblock": true, 00:10:04.347 "num_base_bdevs": 3, 00:10:04.347 "num_base_bdevs_discovered": 1, 00:10:04.347 "num_base_bdevs_operational": 3, 00:10:04.347 "base_bdevs_list": [ 00:10:04.347 { 00:10:04.347 "name": "pt1", 00:10:04.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.347 "is_configured": true, 00:10:04.347 "data_offset": 2048, 00:10:04.347 "data_size": 63488 00:10:04.347 }, 00:10:04.347 { 00:10:04.347 "name": null, 00:10:04.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.347 "is_configured": false, 00:10:04.347 "data_offset": 0, 00:10:04.347 "data_size": 63488 00:10:04.347 }, 00:10:04.347 { 00:10:04.347 "name": null, 00:10:04.347 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:04.347 "is_configured": false, 00:10:04.347 "data_offset": 2048, 00:10:04.347 "data_size": 63488 00:10:04.347 } 00:10:04.347 ] 00:10:04.347 }' 00:10:04.347 18:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.347 18:59:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.914 [2024-11-26 18:59:31.319012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.914 [2024-11-26 18:59:31.319243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.914 [2024-11-26 18:59:31.319346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:04.914 [2024-11-26 18:59:31.319517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.914 [2024-11-26 18:59:31.320181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.914 [2024-11-26 18:59:31.320213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.914 [2024-11-26 18:59:31.320348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:04.914 [2024-11-26 18:59:31.320389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.914 pt2 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.914 [2024-11-26 18:59:31.326970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:04.914 [2024-11-26 18:59:31.327156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.914 [2024-11-26 18:59:31.327220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:04.914 [2024-11-26 18:59:31.327470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.914 [2024-11-26 18:59:31.327978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.914 [2024-11-26 18:59:31.328135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:04.914 [2024-11-26 18:59:31.328339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:04.914 [2024-11-26 18:59:31.328509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:04.914 [2024-11-26 18:59:31.328712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:04.914 [2024-11-26 18:59:31.328835] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:04.914 [2024-11-26 18:59:31.329319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:04.914 [2024-11-26 
18:59:31.329642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:04.914 [2024-11-26 18:59:31.329761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:04.914 pt3 00:10:04.914 [2024-11-26 18:59:31.330040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 --
# jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.914 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.914 "name": "raid_bdev1", 00:10:04.914 "uuid": "74bbeeea-1ed2-49da-998b-63aeb28b4d58", 00:10:04.914 "strip_size_kb": 64, 00:10:04.914 "state": "online", 00:10:04.914 "raid_level": "concat", 00:10:04.914 "superblock": true, 00:10:04.914 "num_base_bdevs": 3, 00:10:04.914 "num_base_bdevs_discovered": 3, 00:10:04.914 "num_base_bdevs_operational": 3, 00:10:04.914 "base_bdevs_list": [ 00:10:04.914 { 00:10:04.914 "name": "pt1", 00:10:04.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.914 "is_configured": true, 00:10:04.914 "data_offset": 2048, 00:10:04.914 "data_size": 63488 00:10:04.914 }, 00:10:04.914 { 00:10:04.914 "name": "pt2", 00:10:04.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.914 "is_configured": true, 00:10:04.914 "data_offset": 2048, 00:10:04.914 "data_size": 63488 00:10:04.914 }, 00:10:04.914 { 00:10:04.914 "name": "pt3", 00:10:04.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.914 "is_configured": true, 00:10:04.915 "data_offset": 2048, 00:10:04.915 "data_size": 63488 00:10:04.915 } 00:10:04.915 ] 00:10:04.915 }' 00:10:04.915 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.915 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:05.481 18:59:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.481 [2024-11-26 18:59:31.875589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.481 "name": "raid_bdev1", 00:10:05.481 "aliases": [ 00:10:05.481 "74bbeeea-1ed2-49da-998b-63aeb28b4d58" 00:10:05.481 ], 00:10:05.481 "product_name": "Raid Volume", 00:10:05.481 "block_size": 512, 00:10:05.481 "num_blocks": 190464, 00:10:05.481 "uuid": "74bbeeea-1ed2-49da-998b-63aeb28b4d58", 00:10:05.481 "assigned_rate_limits": { 00:10:05.481 "rw_ios_per_sec": 0, 00:10:05.481 "rw_mbytes_per_sec": 0, 00:10:05.481 "r_mbytes_per_sec": 0, 00:10:05.481 "w_mbytes_per_sec": 0 00:10:05.481 }, 00:10:05.481 "claimed": false, 00:10:05.481 "zoned": false, 00:10:05.481 "supported_io_types": { 00:10:05.481 "read": true, 00:10:05.481 "write": true, 00:10:05.481 "unmap": true, 00:10:05.481 "flush": true, 00:10:05.481 "reset": true, 00:10:05.481 "nvme_admin": false, 00:10:05.481 "nvme_io": false, 00:10:05.481 "nvme_io_md": false, 00:10:05.481 
"write_zeroes": true, 00:10:05.481 "zcopy": false, 00:10:05.481 "get_zone_info": false, 00:10:05.481 "zone_management": false, 00:10:05.481 "zone_append": false, 00:10:05.481 "compare": false, 00:10:05.481 "compare_and_write": false, 00:10:05.481 "abort": false, 00:10:05.481 "seek_hole": false, 00:10:05.481 "seek_data": false, 00:10:05.481 "copy": false, 00:10:05.481 "nvme_iov_md": false 00:10:05.481 }, 00:10:05.481 "memory_domains": [ 00:10:05.481 { 00:10:05.481 "dma_device_id": "system", 00:10:05.481 "dma_device_type": 1 00:10:05.481 }, 00:10:05.481 { 00:10:05.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.481 "dma_device_type": 2 00:10:05.481 }, 00:10:05.481 { 00:10:05.481 "dma_device_id": "system", 00:10:05.481 "dma_device_type": 1 00:10:05.481 }, 00:10:05.481 { 00:10:05.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.481 "dma_device_type": 2 00:10:05.481 }, 00:10:05.481 { 00:10:05.481 "dma_device_id": "system", 00:10:05.481 "dma_device_type": 1 00:10:05.481 }, 00:10:05.481 { 00:10:05.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.481 "dma_device_type": 2 00:10:05.481 } 00:10:05.481 ], 00:10:05.481 "driver_specific": { 00:10:05.481 "raid": { 00:10:05.481 "uuid": "74bbeeea-1ed2-49da-998b-63aeb28b4d58", 00:10:05.481 "strip_size_kb": 64, 00:10:05.481 "state": "online", 00:10:05.481 "raid_level": "concat", 00:10:05.481 "superblock": true, 00:10:05.481 "num_base_bdevs": 3, 00:10:05.481 "num_base_bdevs_discovered": 3, 00:10:05.481 "num_base_bdevs_operational": 3, 00:10:05.481 "base_bdevs_list": [ 00:10:05.481 { 00:10:05.481 "name": "pt1", 00:10:05.481 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.481 "is_configured": true, 00:10:05.481 "data_offset": 2048, 00:10:05.481 "data_size": 63488 00:10:05.481 }, 00:10:05.481 { 00:10:05.481 "name": "pt2", 00:10:05.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.481 "is_configured": true, 00:10:05.481 "data_offset": 2048, 00:10:05.481 "data_size": 63488 00:10:05.481 }, 00:10:05.481 
{ 00:10:05.481 "name": "pt3", 00:10:05.481 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.481 "is_configured": true, 00:10:05.481 "data_offset": 2048, 00:10:05.481 "data_size": 63488 00:10:05.481 } 00:10:05.481 ] 00:10:05.481 } 00:10:05.481 } 00:10:05.481 }' 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:05.481 pt2 00:10:05.481 pt3' 00:10:05.481 18:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:05.481 18:59:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.481 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:05.740 
[2024-11-26 18:59:32.167556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 74bbeeea-1ed2-49da-998b-63aeb28b4d58 '!=' 74bbeeea-1ed2-49da-998b-63aeb28b4d58 ']' 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67248 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67248 ']' 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67248 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67248 00:10:05.740 killing process with pid 67248 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67248' 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67248 00:10:05.740 [2024-11-26 18:59:32.245831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.740 18:59:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 67248 00:10:05.740 [2024-11-26 18:59:32.245958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.740 [2024-11-26 18:59:32.246046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.740 [2024-11-26 18:59:32.246066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:05.997 [2024-11-26 18:59:32.535132] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.370 18:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:07.370 00:10:07.370 real 0m5.825s 00:10:07.370 user 0m8.699s 00:10:07.370 sys 0m0.837s 00:10:07.370 ************************************ 00:10:07.370 END TEST raid_superblock_test 00:10:07.370 ************************************ 00:10:07.370 18:59:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.370 18:59:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.370 18:59:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:07.370 18:59:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:07.370 18:59:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.370 18:59:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.370 ************************************ 00:10:07.370 START TEST raid_read_error_test 00:10:07.370 ************************************ 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:07.370 18:59:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mVFt4EDQiS 00:10:07.370 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67507 00:10:07.371 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67507 00:10:07.371 18:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67507 ']' 00:10:07.371 18:59:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:07.371 18:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.371 18:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.371 18:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.371 18:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.371 18:59:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.371 [2024-11-26 18:59:33.864705] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:10:07.371 [2024-11-26 18:59:33.865192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67507 ] 00:10:07.629 [2024-11-26 18:59:34.052472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.629 [2024-11-26 18:59:34.201929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.887 [2024-11-26 18:59:34.451063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.887 [2024-11-26 18:59:34.451154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.453 BaseBdev1_malloc 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.453 true 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.453 [2024-11-26 18:59:34.925826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:08.453 [2024-11-26 18:59:34.926067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.453 [2024-11-26 18:59:34.926154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:08.453 [2024-11-26 18:59:34.926278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.453 [2024-11-26 18:59:34.929364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.453 [2024-11-26 18:59:34.929421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:08.453 BaseBdev1 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.453 BaseBdev2_malloc 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.453 true 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.453 [2024-11-26 18:59:34.993336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:08.453 [2024-11-26 18:59:34.993567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.453 [2024-11-26 18:59:34.993638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:08.453 [2024-11-26 18:59:34.993759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.453 [2024-11-26 18:59:34.996793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.453 [2024-11-26 18:59:34.996955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:08.453 BaseBdev2 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.453 18:59:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.453 BaseBdev3_malloc 00:10:08.453 18:59:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.453 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:08.453 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.453 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.453 true 00:10:08.453 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.453 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:08.453 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.453 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.453 [2024-11-26 18:59:35.074075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:08.711 [2024-11-26 18:59:35.074273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.711 [2024-11-26 18:59:35.074324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:08.711 [2024-11-26 18:59:35.074345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.711 [2024-11-26 18:59:35.077429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.711 [2024-11-26 18:59:35.077509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:08.711 BaseBdev3 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.711 [2024-11-26 18:59:35.086201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.711 [2024-11-26 18:59:35.088987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.711 [2024-11-26 18:59:35.089225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.711 [2024-11-26 18:59:35.089578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:08.711 [2024-11-26 18:59:35.089708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:08.711 [2024-11-26 18:59:35.090071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:08.711 [2024-11-26 18:59:35.090376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:08.711 [2024-11-26 18:59:35.090506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:08.711 [2024-11-26 18:59:35.090875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.711 18:59:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.711 "name": "raid_bdev1", 00:10:08.711 "uuid": "31234602-e85a-48b4-b557-83c66f2ea0b0", 00:10:08.711 "strip_size_kb": 64, 00:10:08.711 "state": "online", 00:10:08.711 "raid_level": "concat", 00:10:08.711 "superblock": true, 00:10:08.711 "num_base_bdevs": 3, 00:10:08.711 "num_base_bdevs_discovered": 3, 00:10:08.711 "num_base_bdevs_operational": 3, 00:10:08.711 "base_bdevs_list": [ 00:10:08.711 { 00:10:08.711 "name": "BaseBdev1", 00:10:08.711 "uuid": "a988b33a-b58e-573b-8919-017cc3c7e083", 00:10:08.711 "is_configured": true, 00:10:08.711 "data_offset": 2048, 00:10:08.711 "data_size": 63488 00:10:08.711 }, 00:10:08.711 { 00:10:08.711 "name": "BaseBdev2", 00:10:08.711 "uuid": "112dcad8-d94d-529a-9815-8ad85849482d", 00:10:08.711 "is_configured": true, 00:10:08.711 "data_offset": 2048, 00:10:08.711 "data_size": 63488 
00:10:08.711 }, 00:10:08.711 { 00:10:08.711 "name": "BaseBdev3", 00:10:08.711 "uuid": "d1d671a7-e1c3-5583-bff9-8d1d757d5690", 00:10:08.711 "is_configured": true, 00:10:08.711 "data_offset": 2048, 00:10:08.711 "data_size": 63488 00:10:08.711 } 00:10:08.711 ] 00:10:08.711 }' 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.711 18:59:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.277 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:09.277 18:59:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:09.277 [2024-11-26 18:59:35.756605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.210 18:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.211 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.211 "name": "raid_bdev1", 00:10:10.211 "uuid": "31234602-e85a-48b4-b557-83c66f2ea0b0", 00:10:10.211 "strip_size_kb": 64, 00:10:10.211 "state": "online", 00:10:10.211 "raid_level": "concat", 00:10:10.211 "superblock": true, 00:10:10.211 "num_base_bdevs": 3, 00:10:10.211 "num_base_bdevs_discovered": 3, 00:10:10.211 "num_base_bdevs_operational": 3, 00:10:10.211 "base_bdevs_list": [ 00:10:10.211 { 00:10:10.211 "name": "BaseBdev1", 00:10:10.211 "uuid": "a988b33a-b58e-573b-8919-017cc3c7e083", 00:10:10.211 "is_configured": true, 00:10:10.211 "data_offset": 2048, 00:10:10.211 "data_size": 63488 
00:10:10.211 }, 00:10:10.211 { 00:10:10.211 "name": "BaseBdev2", 00:10:10.211 "uuid": "112dcad8-d94d-529a-9815-8ad85849482d", 00:10:10.211 "is_configured": true, 00:10:10.211 "data_offset": 2048, 00:10:10.211 "data_size": 63488 00:10:10.211 }, 00:10:10.211 { 00:10:10.211 "name": "BaseBdev3", 00:10:10.211 "uuid": "d1d671a7-e1c3-5583-bff9-8d1d757d5690", 00:10:10.211 "is_configured": true, 00:10:10.211 "data_offset": 2048, 00:10:10.211 "data_size": 63488 00:10:10.211 } 00:10:10.211 ] 00:10:10.211 }' 00:10:10.211 18:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.211 18:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.848 18:59:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:10.848 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.848 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.848 [2024-11-26 18:59:37.162540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.848 [2024-11-26 18:59:37.162717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.848 [2024-11-26 18:59:37.166343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.848 [2024-11-26 18:59:37.166529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.848 [2024-11-26 18:59:37.166635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.848 [2024-11-26 18:59:37.166786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:10.848 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.848 { 00:10:10.848 "results": [ 00:10:10.848 { 00:10:10.848 "job": "raid_bdev1", 
00:10:10.848 "core_mask": "0x1", 00:10:10.848 "workload": "randrw", 00:10:10.848 "percentage": 50, 00:10:10.848 "status": "finished", 00:10:10.848 "queue_depth": 1, 00:10:10.848 "io_size": 131072, 00:10:10.848 "runtime": 1.403744, 00:10:10.848 "iops": 9796.658080105773, 00:10:10.848 "mibps": 1224.5822600132217, 00:10:10.848 "io_failed": 1, 00:10:10.848 "io_timeout": 0, 00:10:10.848 "avg_latency_us": 143.14015031431157, 00:10:10.848 "min_latency_us": 44.21818181818182, 00:10:10.848 "max_latency_us": 1966.08 00:10:10.848 } 00:10:10.848 ], 00:10:10.848 "core_count": 1 00:10:10.848 } 00:10:10.848 18:59:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67507 00:10:10.848 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67507 ']' 00:10:10.849 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67507 00:10:10.849 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:10.849 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.849 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67507 00:10:10.849 killing process with pid 67507 00:10:10.849 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.849 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.849 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67507' 00:10:10.849 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67507 00:10:10.849 [2024-11-26 18:59:37.205748] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.849 18:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67507 00:10:10.849 [2024-11-26 18:59:37.431253] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.223 18:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:12.223 18:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mVFt4EDQiS 00:10:12.223 18:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:12.223 18:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:12.223 18:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:12.223 18:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.223 18:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:12.223 18:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:12.223 00:10:12.223 real 0m4.917s 00:10:12.223 user 0m5.953s 00:10:12.224 sys 0m0.722s 00:10:12.224 18:59:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.224 ************************************ 00:10:12.224 END TEST raid_read_error_test 00:10:12.224 ************************************ 00:10:12.224 18:59:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.224 18:59:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:12.224 18:59:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:12.224 18:59:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.224 18:59:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.224 ************************************ 00:10:12.224 START TEST raid_write_error_test 00:10:12.224 ************************************ 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:10:12.224 18:59:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:12.224 18:59:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pVEs0Z5yzu 00:10:12.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67658 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67658 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67658 ']' 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.224 18:59:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.224 [2024-11-26 18:59:38.828816] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:10:12.224 [2024-11-26 18:59:38.829008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67658 ] 00:10:12.482 [2024-11-26 18:59:39.016075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.740 [2024-11-26 18:59:39.208358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.999 [2024-11-26 18:59:39.438026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.999 [2024-11-26 18:59:39.438070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.257 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.257 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:13.257 18:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.257 18:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:13.257 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.257 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.515 BaseBdev1_malloc 00:10:13.515 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.515 18:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:13.515 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.515 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.515 true 00:10:13.515 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.516 [2024-11-26 18:59:39.899037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:13.516 [2024-11-26 18:59:39.899257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.516 [2024-11-26 18:59:39.899348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:13.516 [2024-11-26 18:59:39.899567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.516 [2024-11-26 18:59:39.902530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.516 [2024-11-26 18:59:39.902583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:13.516 BaseBdev1 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.516 BaseBdev2_malloc 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.516 true 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.516 [2024-11-26 18:59:39.968019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:13.516 [2024-11-26 18:59:39.968093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.516 [2024-11-26 18:59:39.968119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:13.516 [2024-11-26 18:59:39.968136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.516 [2024-11-26 18:59:39.971096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.516 [2024-11-26 18:59:39.971146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:13.516 BaseBdev2 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.516 18:59:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.516 18:59:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.516 BaseBdev3_malloc 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.516 true 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.516 [2024-11-26 18:59:40.044747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:13.516 [2024-11-26 18:59:40.044948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.516 [2024-11-26 18:59:40.044987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:13.516 [2024-11-26 18:59:40.045008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.516 [2024-11-26 18:59:40.047974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.516 [2024-11-26 18:59:40.048137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:13.516 BaseBdev3 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.516 [2024-11-26 18:59:40.056857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.516 [2024-11-26 18:59:40.059453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.516 [2024-11-26 18:59:40.059565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.516 [2024-11-26 18:59:40.059842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:13.516 [2024-11-26 18:59:40.059862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:13.516 [2024-11-26 18:59:40.060181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:13.516 [2024-11-26 18:59:40.060440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:13.516 [2024-11-26 18:59:40.060466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:13.516 [2024-11-26 18:59:40.060654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.516 "name": "raid_bdev1", 00:10:13.516 "uuid": "ab6277d1-3056-4a85-a7cd-bc4f7ba2d48e", 00:10:13.516 "strip_size_kb": 64, 00:10:13.516 "state": "online", 00:10:13.516 "raid_level": "concat", 00:10:13.516 "superblock": true, 00:10:13.516 "num_base_bdevs": 3, 00:10:13.516 "num_base_bdevs_discovered": 3, 00:10:13.516 "num_base_bdevs_operational": 3, 00:10:13.516 "base_bdevs_list": [ 00:10:13.516 { 00:10:13.516 
"name": "BaseBdev1", 00:10:13.516 "uuid": "e70c432a-8e8f-500c-8d6e-67ab1580fe58", 00:10:13.516 "is_configured": true, 00:10:13.516 "data_offset": 2048, 00:10:13.516 "data_size": 63488 00:10:13.516 }, 00:10:13.516 { 00:10:13.516 "name": "BaseBdev2", 00:10:13.516 "uuid": "bfa0b941-236c-56a1-82a9-cf1283df186b", 00:10:13.516 "is_configured": true, 00:10:13.516 "data_offset": 2048, 00:10:13.516 "data_size": 63488 00:10:13.516 }, 00:10:13.516 { 00:10:13.516 "name": "BaseBdev3", 00:10:13.516 "uuid": "0653748d-7867-5124-99a5-9a9041ec334f", 00:10:13.516 "is_configured": true, 00:10:13.516 "data_offset": 2048, 00:10:13.516 "data_size": 63488 00:10:13.516 } 00:10:13.516 ] 00:10:13.516 }' 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.516 18:59:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.082 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:14.082 18:59:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:14.082 [2024-11-26 18:59:40.618518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.019 "name": "raid_bdev1", 00:10:15.019 "uuid": "ab6277d1-3056-4a85-a7cd-bc4f7ba2d48e", 00:10:15.019 "strip_size_kb": 64, 00:10:15.019 "state": "online", 
00:10:15.019 "raid_level": "concat", 00:10:15.019 "superblock": true, 00:10:15.019 "num_base_bdevs": 3, 00:10:15.019 "num_base_bdevs_discovered": 3, 00:10:15.019 "num_base_bdevs_operational": 3, 00:10:15.019 "base_bdevs_list": [ 00:10:15.019 { 00:10:15.019 "name": "BaseBdev1", 00:10:15.019 "uuid": "e70c432a-8e8f-500c-8d6e-67ab1580fe58", 00:10:15.019 "is_configured": true, 00:10:15.019 "data_offset": 2048, 00:10:15.019 "data_size": 63488 00:10:15.019 }, 00:10:15.019 { 00:10:15.019 "name": "BaseBdev2", 00:10:15.019 "uuid": "bfa0b941-236c-56a1-82a9-cf1283df186b", 00:10:15.019 "is_configured": true, 00:10:15.019 "data_offset": 2048, 00:10:15.019 "data_size": 63488 00:10:15.019 }, 00:10:15.019 { 00:10:15.019 "name": "BaseBdev3", 00:10:15.019 "uuid": "0653748d-7867-5124-99a5-9a9041ec334f", 00:10:15.019 "is_configured": true, 00:10:15.019 "data_offset": 2048, 00:10:15.019 "data_size": 63488 00:10:15.019 } 00:10:15.019 ] 00:10:15.019 }' 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.019 18:59:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.586 [2024-11-26 18:59:42.048879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.586 [2024-11-26 18:59:42.048917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.586 [2024-11-26 18:59:42.052423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.586 [2024-11-26 18:59:42.052625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.586 [2024-11-26 18:59:42.052707] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.586 [2024-11-26 18:59:42.052726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:15.586 { 00:10:15.586 "results": [ 00:10:15.586 { 00:10:15.586 "job": "raid_bdev1", 00:10:15.586 "core_mask": "0x1", 00:10:15.586 "workload": "randrw", 00:10:15.586 "percentage": 50, 00:10:15.586 "status": "finished", 00:10:15.586 "queue_depth": 1, 00:10:15.586 "io_size": 131072, 00:10:15.586 "runtime": 1.427742, 00:10:15.586 "iops": 9596.271595288224, 00:10:15.586 "mibps": 1199.533949411028, 00:10:15.586 "io_failed": 1, 00:10:15.586 "io_timeout": 0, 00:10:15.586 "avg_latency_us": 146.35197728267937, 00:10:15.586 "min_latency_us": 40.02909090909091, 00:10:15.586 "max_latency_us": 1906.5018181818182 00:10:15.586 } 00:10:15.586 ], 00:10:15.586 "core_count": 1 00:10:15.586 } 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67658 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67658 ']' 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67658 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67658 00:10:15.586 killing process with pid 67658 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.586 18:59:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67658' 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67658 00:10:15.586 [2024-11-26 18:59:42.087650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.586 18:59:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67658 00:10:15.844 [2024-11-26 18:59:42.309831] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.232 18:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pVEs0Z5yzu 00:10:17.232 18:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:17.232 18:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:17.232 ************************************ 00:10:17.232 END TEST raid_write_error_test 00:10:17.232 ************************************ 00:10:17.232 18:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:17.232 18:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:17.232 18:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.232 18:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:17.232 18:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:17.232 00:10:17.232 real 0m4.836s 00:10:17.232 user 0m5.847s 00:10:17.232 sys 0m0.640s 00:10:17.232 18:59:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.232 18:59:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.232 18:59:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:17.232 18:59:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:17.232 18:59:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:17.232 18:59:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.232 18:59:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.232 ************************************ 00:10:17.232 START TEST raid_state_function_test 00:10:17.232 ************************************ 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:17.232 Process raid pid: 67796 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67796 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67796' 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67796 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67796 ']' 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.232 18:59:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.232 [2024-11-26 18:59:43.710639] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:10:17.232 [2024-11-26 18:59:43.710994] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.493 [2024-11-26 18:59:43.893548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.493 [2024-11-26 18:59:44.043607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.753 [2024-11-26 18:59:44.281579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.753 [2024-11-26 18:59:44.281835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.320 18:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.320 18:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.321 [2024-11-26 18:59:44.647415] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.321 [2024-11-26 18:59:44.647483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.321 [2024-11-26 18:59:44.647503] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.321 [2024-11-26 18:59:44.647520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.321 [2024-11-26 18:59:44.647530] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.321 [2024-11-26 18:59:44.647545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.321 
18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.321 "name": "Existed_Raid", 00:10:18.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.321 "strip_size_kb": 0, 00:10:18.321 "state": "configuring", 00:10:18.321 "raid_level": "raid1", 00:10:18.321 "superblock": false, 00:10:18.321 "num_base_bdevs": 3, 00:10:18.321 "num_base_bdevs_discovered": 0, 00:10:18.321 "num_base_bdevs_operational": 3, 00:10:18.321 "base_bdevs_list": [ 00:10:18.321 { 00:10:18.321 "name": "BaseBdev1", 00:10:18.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.321 "is_configured": false, 00:10:18.321 "data_offset": 0, 00:10:18.321 "data_size": 0 00:10:18.321 }, 00:10:18.321 { 00:10:18.321 "name": "BaseBdev2", 00:10:18.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.321 "is_configured": false, 00:10:18.321 "data_offset": 0, 00:10:18.321 "data_size": 0 00:10:18.321 }, 00:10:18.321 { 00:10:18.321 "name": "BaseBdev3", 00:10:18.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.321 "is_configured": false, 00:10:18.321 "data_offset": 0, 00:10:18.321 "data_size": 0 00:10:18.321 } 00:10:18.321 ] 00:10:18.321 }' 00:10:18.321 18:59:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.321 18:59:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.580 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:18.580 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.580 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.580 [2024-11-26 18:59:45.159503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.580 [2024-11-26 18:59:45.159551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:18.580 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.580 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:18.580 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.580 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.580 [2024-11-26 18:59:45.167466] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.580 [2024-11-26 18:59:45.167523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.580 [2024-11-26 18:59:45.167539] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.580 [2024-11-26 18:59:45.167556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.580 [2024-11-26 18:59:45.167566] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.580 [2024-11-26 18:59:45.167581] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.580 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.580 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.580 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.580 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.840 [2024-11-26 18:59:45.217635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.840 BaseBdev1 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.840 [ 00:10:18.840 { 00:10:18.840 "name": "BaseBdev1", 00:10:18.840 "aliases": [ 00:10:18.840 "cdbde5a2-980f-4502-9fca-d3d3801c2228" 00:10:18.840 ], 00:10:18.840 "product_name": "Malloc disk", 00:10:18.840 "block_size": 512, 00:10:18.840 "num_blocks": 65536, 00:10:18.840 "uuid": "cdbde5a2-980f-4502-9fca-d3d3801c2228", 00:10:18.840 "assigned_rate_limits": { 00:10:18.840 "rw_ios_per_sec": 0, 00:10:18.840 "rw_mbytes_per_sec": 0, 00:10:18.840 "r_mbytes_per_sec": 0, 00:10:18.840 "w_mbytes_per_sec": 0 00:10:18.840 }, 00:10:18.840 "claimed": true, 00:10:18.840 "claim_type": "exclusive_write", 00:10:18.840 "zoned": false, 00:10:18.840 "supported_io_types": { 00:10:18.840 "read": true, 00:10:18.840 "write": true, 00:10:18.840 "unmap": true, 00:10:18.840 "flush": true, 00:10:18.840 "reset": true, 00:10:18.840 "nvme_admin": false, 00:10:18.840 "nvme_io": false, 00:10:18.840 "nvme_io_md": false, 00:10:18.840 "write_zeroes": true, 00:10:18.840 "zcopy": true, 00:10:18.840 "get_zone_info": false, 00:10:18.840 "zone_management": false, 00:10:18.840 "zone_append": false, 00:10:18.840 "compare": false, 00:10:18.840 "compare_and_write": false, 00:10:18.840 "abort": true, 00:10:18.840 "seek_hole": false, 00:10:18.840 "seek_data": false, 00:10:18.840 "copy": true, 00:10:18.840 "nvme_iov_md": false 00:10:18.840 }, 00:10:18.840 "memory_domains": [ 00:10:18.840 { 00:10:18.840 "dma_device_id": "system", 00:10:18.840 "dma_device_type": 1 00:10:18.840 }, 00:10:18.840 { 00:10:18.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.840 "dma_device_type": 2 00:10:18.840 } 00:10:18.840 ], 00:10:18.840 "driver_specific": {} 00:10:18.840 } 00:10:18.840 ] 00:10:18.840 18:59:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:18.840 "name": "Existed_Raid", 00:10:18.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.840 "strip_size_kb": 0, 00:10:18.840 "state": "configuring", 00:10:18.840 "raid_level": "raid1", 00:10:18.840 "superblock": false, 00:10:18.840 "num_base_bdevs": 3, 00:10:18.840 "num_base_bdevs_discovered": 1, 00:10:18.840 "num_base_bdevs_operational": 3, 00:10:18.840 "base_bdevs_list": [ 00:10:18.840 { 00:10:18.840 "name": "BaseBdev1", 00:10:18.840 "uuid": "cdbde5a2-980f-4502-9fca-d3d3801c2228", 00:10:18.840 "is_configured": true, 00:10:18.840 "data_offset": 0, 00:10:18.840 "data_size": 65536 00:10:18.840 }, 00:10:18.840 { 00:10:18.840 "name": "BaseBdev2", 00:10:18.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.840 "is_configured": false, 00:10:18.840 "data_offset": 0, 00:10:18.840 "data_size": 0 00:10:18.840 }, 00:10:18.840 { 00:10:18.840 "name": "BaseBdev3", 00:10:18.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.840 "is_configured": false, 00:10:18.840 "data_offset": 0, 00:10:18.840 "data_size": 0 00:10:18.840 } 00:10:18.840 ] 00:10:18.840 }' 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.840 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.408 [2024-11-26 18:59:45.765821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.408 [2024-11-26 18:59:45.765891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.408 [2024-11-26 18:59:45.773849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.408 [2024-11-26 18:59:45.776487] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.408 [2024-11-26 18:59:45.776675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.408 [2024-11-26 18:59:45.776704] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:19.408 [2024-11-26 18:59:45.776722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.408 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.408 "name": "Existed_Raid", 00:10:19.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.408 "strip_size_kb": 0, 00:10:19.408 "state": "configuring", 00:10:19.408 "raid_level": "raid1", 00:10:19.408 "superblock": false, 00:10:19.408 "num_base_bdevs": 3, 00:10:19.408 "num_base_bdevs_discovered": 1, 00:10:19.408 "num_base_bdevs_operational": 3, 00:10:19.408 "base_bdevs_list": [ 00:10:19.409 { 00:10:19.409 "name": "BaseBdev1", 00:10:19.409 "uuid": "cdbde5a2-980f-4502-9fca-d3d3801c2228", 00:10:19.409 "is_configured": true, 00:10:19.409 "data_offset": 0, 00:10:19.409 "data_size": 65536 00:10:19.409 }, 00:10:19.409 { 00:10:19.409 "name": "BaseBdev2", 00:10:19.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.409 
"is_configured": false, 00:10:19.409 "data_offset": 0, 00:10:19.409 "data_size": 0 00:10:19.409 }, 00:10:19.409 { 00:10:19.409 "name": "BaseBdev3", 00:10:19.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.409 "is_configured": false, 00:10:19.409 "data_offset": 0, 00:10:19.409 "data_size": 0 00:10:19.409 } 00:10:19.409 ] 00:10:19.409 }' 00:10:19.409 18:59:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.409 18:59:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.976 [2024-11-26 18:59:46.340901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.976 BaseBdev2 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:19.976 18:59:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.976 [ 00:10:19.976 { 00:10:19.976 "name": "BaseBdev2", 00:10:19.976 "aliases": [ 00:10:19.976 "2b360a8b-46a9-46b2-b2bc-0e28d00ce289" 00:10:19.976 ], 00:10:19.976 "product_name": "Malloc disk", 00:10:19.976 "block_size": 512, 00:10:19.976 "num_blocks": 65536, 00:10:19.976 "uuid": "2b360a8b-46a9-46b2-b2bc-0e28d00ce289", 00:10:19.976 "assigned_rate_limits": { 00:10:19.976 "rw_ios_per_sec": 0, 00:10:19.976 "rw_mbytes_per_sec": 0, 00:10:19.976 "r_mbytes_per_sec": 0, 00:10:19.976 "w_mbytes_per_sec": 0 00:10:19.976 }, 00:10:19.976 "claimed": true, 00:10:19.976 "claim_type": "exclusive_write", 00:10:19.976 "zoned": false, 00:10:19.976 "supported_io_types": { 00:10:19.976 "read": true, 00:10:19.976 "write": true, 00:10:19.976 "unmap": true, 00:10:19.976 "flush": true, 00:10:19.976 "reset": true, 00:10:19.976 "nvme_admin": false, 00:10:19.976 "nvme_io": false, 00:10:19.976 "nvme_io_md": false, 00:10:19.976 "write_zeroes": true, 00:10:19.976 "zcopy": true, 00:10:19.976 "get_zone_info": false, 00:10:19.976 "zone_management": false, 00:10:19.976 "zone_append": false, 00:10:19.976 "compare": false, 00:10:19.976 "compare_and_write": false, 00:10:19.976 "abort": true, 00:10:19.976 "seek_hole": false, 00:10:19.976 "seek_data": false, 00:10:19.976 "copy": true, 00:10:19.976 "nvme_iov_md": false 00:10:19.976 }, 00:10:19.976 
"memory_domains": [ 00:10:19.976 { 00:10:19.976 "dma_device_id": "system", 00:10:19.976 "dma_device_type": 1 00:10:19.976 }, 00:10:19.976 { 00:10:19.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.976 "dma_device_type": 2 00:10:19.976 } 00:10:19.976 ], 00:10:19.976 "driver_specific": {} 00:10:19.976 } 00:10:19.976 ] 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.976 "name": "Existed_Raid", 00:10:19.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.976 "strip_size_kb": 0, 00:10:19.976 "state": "configuring", 00:10:19.976 "raid_level": "raid1", 00:10:19.976 "superblock": false, 00:10:19.976 "num_base_bdevs": 3, 00:10:19.976 "num_base_bdevs_discovered": 2, 00:10:19.976 "num_base_bdevs_operational": 3, 00:10:19.976 "base_bdevs_list": [ 00:10:19.976 { 00:10:19.976 "name": "BaseBdev1", 00:10:19.976 "uuid": "cdbde5a2-980f-4502-9fca-d3d3801c2228", 00:10:19.976 "is_configured": true, 00:10:19.976 "data_offset": 0, 00:10:19.976 "data_size": 65536 00:10:19.976 }, 00:10:19.976 { 00:10:19.976 "name": "BaseBdev2", 00:10:19.976 "uuid": "2b360a8b-46a9-46b2-b2bc-0e28d00ce289", 00:10:19.976 "is_configured": true, 00:10:19.976 "data_offset": 0, 00:10:19.976 "data_size": 65536 00:10:19.976 }, 00:10:19.976 { 00:10:19.976 "name": "BaseBdev3", 00:10:19.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.976 "is_configured": false, 00:10:19.976 "data_offset": 0, 00:10:19.976 "data_size": 0 00:10:19.976 } 00:10:19.976 ] 00:10:19.976 }' 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.976 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.543 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:20.543 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.543 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.543 [2024-11-26 18:59:46.949222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.543 [2024-11-26 18:59:46.949522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:20.543 [2024-11-26 18:59:46.949557] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:20.543 [2024-11-26 18:59:46.950121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:20.543 [2024-11-26 18:59:46.950397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:20.543 [2024-11-26 18:59:46.950415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:20.543 [2024-11-26 18:59:46.950763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.543 BaseBdev3 00:10:20.543 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.543 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:20.543 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:20.543 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.543 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:20.543 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.543 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.543 18:59:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.544 [ 00:10:20.544 { 00:10:20.544 "name": "BaseBdev3", 00:10:20.544 "aliases": [ 00:10:20.544 "b141bf46-0cdb-4c90-bfd8-fab9a987dca9" 00:10:20.544 ], 00:10:20.544 "product_name": "Malloc disk", 00:10:20.544 "block_size": 512, 00:10:20.544 "num_blocks": 65536, 00:10:20.544 "uuid": "b141bf46-0cdb-4c90-bfd8-fab9a987dca9", 00:10:20.544 "assigned_rate_limits": { 00:10:20.544 "rw_ios_per_sec": 0, 00:10:20.544 "rw_mbytes_per_sec": 0, 00:10:20.544 "r_mbytes_per_sec": 0, 00:10:20.544 "w_mbytes_per_sec": 0 00:10:20.544 }, 00:10:20.544 "claimed": true, 00:10:20.544 "claim_type": "exclusive_write", 00:10:20.544 "zoned": false, 00:10:20.544 "supported_io_types": { 00:10:20.544 "read": true, 00:10:20.544 "write": true, 00:10:20.544 "unmap": true, 00:10:20.544 "flush": true, 00:10:20.544 "reset": true, 00:10:20.544 "nvme_admin": false, 00:10:20.544 "nvme_io": false, 00:10:20.544 "nvme_io_md": false, 00:10:20.544 "write_zeroes": true, 00:10:20.544 "zcopy": true, 00:10:20.544 "get_zone_info": false, 00:10:20.544 "zone_management": false, 00:10:20.544 "zone_append": false, 00:10:20.544 "compare": false, 00:10:20.544 "compare_and_write": false, 00:10:20.544 "abort": true, 00:10:20.544 "seek_hole": false, 00:10:20.544 "seek_data": false, 00:10:20.544 
"copy": true, 00:10:20.544 "nvme_iov_md": false 00:10:20.544 }, 00:10:20.544 "memory_domains": [ 00:10:20.544 { 00:10:20.544 "dma_device_id": "system", 00:10:20.544 "dma_device_type": 1 00:10:20.544 }, 00:10:20.544 { 00:10:20.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.544 "dma_device_type": 2 00:10:20.544 } 00:10:20.544 ], 00:10:20.544 "driver_specific": {} 00:10:20.544 } 00:10:20.544 ] 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.544 18:59:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.544 18:59:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.544 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.544 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.544 "name": "Existed_Raid", 00:10:20.544 "uuid": "4481027d-d485-48c9-82a6-5a26afe3129b", 00:10:20.544 "strip_size_kb": 0, 00:10:20.544 "state": "online", 00:10:20.544 "raid_level": "raid1", 00:10:20.544 "superblock": false, 00:10:20.544 "num_base_bdevs": 3, 00:10:20.544 "num_base_bdevs_discovered": 3, 00:10:20.544 "num_base_bdevs_operational": 3, 00:10:20.544 "base_bdevs_list": [ 00:10:20.544 { 00:10:20.544 "name": "BaseBdev1", 00:10:20.544 "uuid": "cdbde5a2-980f-4502-9fca-d3d3801c2228", 00:10:20.544 "is_configured": true, 00:10:20.544 "data_offset": 0, 00:10:20.544 "data_size": 65536 00:10:20.544 }, 00:10:20.544 { 00:10:20.544 "name": "BaseBdev2", 00:10:20.544 "uuid": "2b360a8b-46a9-46b2-b2bc-0e28d00ce289", 00:10:20.544 "is_configured": true, 00:10:20.544 "data_offset": 0, 00:10:20.544 "data_size": 65536 00:10:20.544 }, 00:10:20.544 { 00:10:20.544 "name": "BaseBdev3", 00:10:20.544 "uuid": "b141bf46-0cdb-4c90-bfd8-fab9a987dca9", 00:10:20.544 "is_configured": true, 00:10:20.544 "data_offset": 0, 00:10:20.544 "data_size": 65536 00:10:20.544 } 00:10:20.544 ] 00:10:20.544 }' 00:10:20.544 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.544 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.110 18:59:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.110 [2024-11-26 18:59:47.505849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.110 "name": "Existed_Raid", 00:10:21.110 "aliases": [ 00:10:21.110 "4481027d-d485-48c9-82a6-5a26afe3129b" 00:10:21.110 ], 00:10:21.110 "product_name": "Raid Volume", 00:10:21.110 "block_size": 512, 00:10:21.110 "num_blocks": 65536, 00:10:21.110 "uuid": "4481027d-d485-48c9-82a6-5a26afe3129b", 00:10:21.110 "assigned_rate_limits": { 00:10:21.110 "rw_ios_per_sec": 0, 00:10:21.110 "rw_mbytes_per_sec": 0, 00:10:21.110 "r_mbytes_per_sec": 0, 00:10:21.110 "w_mbytes_per_sec": 0 00:10:21.110 }, 00:10:21.110 "claimed": false, 00:10:21.110 "zoned": false, 
00:10:21.110 "supported_io_types": { 00:10:21.110 "read": true, 00:10:21.110 "write": true, 00:10:21.110 "unmap": false, 00:10:21.110 "flush": false, 00:10:21.110 "reset": true, 00:10:21.110 "nvme_admin": false, 00:10:21.110 "nvme_io": false, 00:10:21.110 "nvme_io_md": false, 00:10:21.110 "write_zeroes": true, 00:10:21.110 "zcopy": false, 00:10:21.110 "get_zone_info": false, 00:10:21.110 "zone_management": false, 00:10:21.110 "zone_append": false, 00:10:21.110 "compare": false, 00:10:21.110 "compare_and_write": false, 00:10:21.110 "abort": false, 00:10:21.110 "seek_hole": false, 00:10:21.110 "seek_data": false, 00:10:21.110 "copy": false, 00:10:21.110 "nvme_iov_md": false 00:10:21.110 }, 00:10:21.110 "memory_domains": [ 00:10:21.110 { 00:10:21.110 "dma_device_id": "system", 00:10:21.110 "dma_device_type": 1 00:10:21.110 }, 00:10:21.110 { 00:10:21.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.110 "dma_device_type": 2 00:10:21.110 }, 00:10:21.110 { 00:10:21.110 "dma_device_id": "system", 00:10:21.110 "dma_device_type": 1 00:10:21.110 }, 00:10:21.110 { 00:10:21.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.110 "dma_device_type": 2 00:10:21.110 }, 00:10:21.110 { 00:10:21.110 "dma_device_id": "system", 00:10:21.110 "dma_device_type": 1 00:10:21.110 }, 00:10:21.110 { 00:10:21.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.110 "dma_device_type": 2 00:10:21.110 } 00:10:21.110 ], 00:10:21.110 "driver_specific": { 00:10:21.110 "raid": { 00:10:21.110 "uuid": "4481027d-d485-48c9-82a6-5a26afe3129b", 00:10:21.110 "strip_size_kb": 0, 00:10:21.110 "state": "online", 00:10:21.110 "raid_level": "raid1", 00:10:21.110 "superblock": false, 00:10:21.110 "num_base_bdevs": 3, 00:10:21.110 "num_base_bdevs_discovered": 3, 00:10:21.110 "num_base_bdevs_operational": 3, 00:10:21.110 "base_bdevs_list": [ 00:10:21.110 { 00:10:21.110 "name": "BaseBdev1", 00:10:21.110 "uuid": "cdbde5a2-980f-4502-9fca-d3d3801c2228", 00:10:21.110 "is_configured": true, 00:10:21.110 
"data_offset": 0, 00:10:21.110 "data_size": 65536 00:10:21.110 }, 00:10:21.110 { 00:10:21.110 "name": "BaseBdev2", 00:10:21.110 "uuid": "2b360a8b-46a9-46b2-b2bc-0e28d00ce289", 00:10:21.110 "is_configured": true, 00:10:21.110 "data_offset": 0, 00:10:21.110 "data_size": 65536 00:10:21.110 }, 00:10:21.110 { 00:10:21.110 "name": "BaseBdev3", 00:10:21.110 "uuid": "b141bf46-0cdb-4c90-bfd8-fab9a987dca9", 00:10:21.110 "is_configured": true, 00:10:21.110 "data_offset": 0, 00:10:21.110 "data_size": 65536 00:10:21.110 } 00:10:21.110 ] 00:10:21.110 } 00:10:21.110 } 00:10:21.110 }' 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:21.110 BaseBdev2 00:10:21.110 BaseBdev3' 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.110 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.369 [2024-11-26 18:59:47.813593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.369 "name": "Existed_Raid", 00:10:21.369 "uuid": "4481027d-d485-48c9-82a6-5a26afe3129b", 00:10:21.369 "strip_size_kb": 0, 00:10:21.369 "state": "online", 00:10:21.369 "raid_level": "raid1", 00:10:21.369 "superblock": false, 00:10:21.369 "num_base_bdevs": 3, 00:10:21.369 "num_base_bdevs_discovered": 2, 00:10:21.369 "num_base_bdevs_operational": 2, 00:10:21.369 "base_bdevs_list": [ 00:10:21.369 { 00:10:21.369 "name": null, 00:10:21.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.369 "is_configured": false, 00:10:21.369 "data_offset": 0, 00:10:21.369 "data_size": 65536 00:10:21.369 }, 00:10:21.369 { 00:10:21.369 "name": "BaseBdev2", 00:10:21.369 "uuid": "2b360a8b-46a9-46b2-b2bc-0e28d00ce289", 00:10:21.369 "is_configured": true, 00:10:21.369 "data_offset": 0, 00:10:21.369 "data_size": 65536 00:10:21.369 }, 00:10:21.369 { 00:10:21.369 "name": "BaseBdev3", 00:10:21.369 "uuid": "b141bf46-0cdb-4c90-bfd8-fab9a987dca9", 00:10:21.369 "is_configured": true, 00:10:21.369 "data_offset": 0, 00:10:21.369 "data_size": 65536 00:10:21.369 } 00:10:21.369 ] 
00:10:21.369 }' 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.369 18:59:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.935 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.935 [2024-11-26 18:59:48.476852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.193 18:59:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.193 [2024-11-26 18:59:48.628912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:22.193 [2024-11-26 18:59:48.629054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.193 [2024-11-26 18:59:48.722839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.193 [2024-11-26 18:59:48.722916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.193 [2024-11-26 18:59:48.722938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:22.193 18:59:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.193 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.452 BaseBdev2 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.452 
18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.452 [ 00:10:22.452 { 00:10:22.452 "name": "BaseBdev2", 00:10:22.452 "aliases": [ 00:10:22.452 "a2eeb176-fa08-4229-b29a-66845cda2fdb" 00:10:22.452 ], 00:10:22.452 "product_name": "Malloc disk", 00:10:22.452 "block_size": 512, 00:10:22.452 "num_blocks": 65536, 00:10:22.452 "uuid": "a2eeb176-fa08-4229-b29a-66845cda2fdb", 00:10:22.452 "assigned_rate_limits": { 00:10:22.452 "rw_ios_per_sec": 0, 00:10:22.452 "rw_mbytes_per_sec": 0, 00:10:22.452 "r_mbytes_per_sec": 0, 00:10:22.452 "w_mbytes_per_sec": 0 00:10:22.452 }, 00:10:22.452 "claimed": false, 00:10:22.452 "zoned": false, 00:10:22.452 "supported_io_types": { 00:10:22.452 "read": true, 00:10:22.452 "write": true, 00:10:22.452 "unmap": true, 00:10:22.452 "flush": true, 00:10:22.452 "reset": true, 00:10:22.452 "nvme_admin": false, 00:10:22.452 "nvme_io": false, 00:10:22.452 "nvme_io_md": false, 00:10:22.452 "write_zeroes": true, 
00:10:22.452 "zcopy": true, 00:10:22.452 "get_zone_info": false, 00:10:22.452 "zone_management": false, 00:10:22.452 "zone_append": false, 00:10:22.452 "compare": false, 00:10:22.452 "compare_and_write": false, 00:10:22.452 "abort": true, 00:10:22.452 "seek_hole": false, 00:10:22.452 "seek_data": false, 00:10:22.452 "copy": true, 00:10:22.452 "nvme_iov_md": false 00:10:22.452 }, 00:10:22.452 "memory_domains": [ 00:10:22.452 { 00:10:22.452 "dma_device_id": "system", 00:10:22.452 "dma_device_type": 1 00:10:22.452 }, 00:10:22.452 { 00:10:22.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.452 "dma_device_type": 2 00:10:22.452 } 00:10:22.452 ], 00:10:22.452 "driver_specific": {} 00:10:22.452 } 00:10:22.452 ] 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.452 BaseBdev3 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:22.452 18:59:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.452 [ 00:10:22.452 { 00:10:22.452 "name": "BaseBdev3", 00:10:22.452 "aliases": [ 00:10:22.452 "41fcb400-abc9-4c98-849f-50b1c396d760" 00:10:22.452 ], 00:10:22.452 "product_name": "Malloc disk", 00:10:22.452 "block_size": 512, 00:10:22.452 "num_blocks": 65536, 00:10:22.452 "uuid": "41fcb400-abc9-4c98-849f-50b1c396d760", 00:10:22.452 "assigned_rate_limits": { 00:10:22.452 "rw_ios_per_sec": 0, 00:10:22.452 "rw_mbytes_per_sec": 0, 00:10:22.452 "r_mbytes_per_sec": 0, 00:10:22.452 "w_mbytes_per_sec": 0 00:10:22.452 }, 00:10:22.452 "claimed": false, 00:10:22.452 "zoned": false, 00:10:22.452 "supported_io_types": { 00:10:22.452 "read": true, 00:10:22.452 "write": true, 00:10:22.452 "unmap": true, 00:10:22.452 "flush": true, 00:10:22.452 "reset": true, 00:10:22.452 "nvme_admin": false, 00:10:22.452 "nvme_io": false, 00:10:22.452 "nvme_io_md": false, 00:10:22.452 "write_zeroes": true, 
00:10:22.452 "zcopy": true, 00:10:22.452 "get_zone_info": false, 00:10:22.452 "zone_management": false, 00:10:22.452 "zone_append": false, 00:10:22.452 "compare": false, 00:10:22.452 "compare_and_write": false, 00:10:22.452 "abort": true, 00:10:22.452 "seek_hole": false, 00:10:22.452 "seek_data": false, 00:10:22.452 "copy": true, 00:10:22.452 "nvme_iov_md": false 00:10:22.452 }, 00:10:22.452 "memory_domains": [ 00:10:22.452 { 00:10:22.452 "dma_device_id": "system", 00:10:22.452 "dma_device_type": 1 00:10:22.452 }, 00:10:22.452 { 00:10:22.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.452 "dma_device_type": 2 00:10:22.452 } 00:10:22.452 ], 00:10:22.452 "driver_specific": {} 00:10:22.452 } 00:10:22.452 ] 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:22.452 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.453 [2024-11-26 18:59:48.949035] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.453 [2024-11-26 18:59:48.949235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.453 [2024-11-26 18:59:48.949412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.453 [2024-11-26 18:59:48.952034] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.453 18:59:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.453 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:22.453 "name": "Existed_Raid", 00:10:22.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.453 "strip_size_kb": 0, 00:10:22.453 "state": "configuring", 00:10:22.453 "raid_level": "raid1", 00:10:22.453 "superblock": false, 00:10:22.453 "num_base_bdevs": 3, 00:10:22.453 "num_base_bdevs_discovered": 2, 00:10:22.453 "num_base_bdevs_operational": 3, 00:10:22.453 "base_bdevs_list": [ 00:10:22.453 { 00:10:22.453 "name": "BaseBdev1", 00:10:22.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.453 "is_configured": false, 00:10:22.453 "data_offset": 0, 00:10:22.453 "data_size": 0 00:10:22.453 }, 00:10:22.453 { 00:10:22.453 "name": "BaseBdev2", 00:10:22.453 "uuid": "a2eeb176-fa08-4229-b29a-66845cda2fdb", 00:10:22.453 "is_configured": true, 00:10:22.453 "data_offset": 0, 00:10:22.453 "data_size": 65536 00:10:22.453 }, 00:10:22.453 { 00:10:22.453 "name": "BaseBdev3", 00:10:22.453 "uuid": "41fcb400-abc9-4c98-849f-50b1c396d760", 00:10:22.453 "is_configured": true, 00:10:22.453 "data_offset": 0, 00:10:22.453 "data_size": 65536 00:10:22.453 } 00:10:22.453 ] 00:10:22.453 }' 00:10:22.453 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.453 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.020 [2024-11-26 18:59:49.453239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.020 "name": "Existed_Raid", 00:10:23.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.020 "strip_size_kb": 0, 00:10:23.020 "state": "configuring", 00:10:23.020 "raid_level": "raid1", 00:10:23.020 "superblock": false, 00:10:23.020 "num_base_bdevs": 3, 
00:10:23.020 "num_base_bdevs_discovered": 1, 00:10:23.020 "num_base_bdevs_operational": 3, 00:10:23.020 "base_bdevs_list": [ 00:10:23.020 { 00:10:23.020 "name": "BaseBdev1", 00:10:23.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.020 "is_configured": false, 00:10:23.020 "data_offset": 0, 00:10:23.020 "data_size": 0 00:10:23.020 }, 00:10:23.020 { 00:10:23.020 "name": null, 00:10:23.020 "uuid": "a2eeb176-fa08-4229-b29a-66845cda2fdb", 00:10:23.020 "is_configured": false, 00:10:23.020 "data_offset": 0, 00:10:23.020 "data_size": 65536 00:10:23.020 }, 00:10:23.020 { 00:10:23.020 "name": "BaseBdev3", 00:10:23.020 "uuid": "41fcb400-abc9-4c98-849f-50b1c396d760", 00:10:23.020 "is_configured": true, 00:10:23.020 "data_offset": 0, 00:10:23.020 "data_size": 65536 00:10:23.020 } 00:10:23.020 ] 00:10:23.020 }' 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.020 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.661 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.661 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.661 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.661 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:23.661 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.661 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:23.661 18:59:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.661 18:59:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.661 18:59:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.661 [2024-11-26 18:59:50.044213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.661 BaseBdev1 00:10:23.661 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.661 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.662 [ 00:10:23.662 { 00:10:23.662 "name": "BaseBdev1", 00:10:23.662 "aliases": [ 00:10:23.662 "d4da0c32-d16a-4d74-a544-c4d97469b842" 00:10:23.662 ], 00:10:23.662 "product_name": "Malloc disk", 
00:10:23.662 "block_size": 512, 00:10:23.662 "num_blocks": 65536, 00:10:23.662 "uuid": "d4da0c32-d16a-4d74-a544-c4d97469b842", 00:10:23.662 "assigned_rate_limits": { 00:10:23.662 "rw_ios_per_sec": 0, 00:10:23.662 "rw_mbytes_per_sec": 0, 00:10:23.662 "r_mbytes_per_sec": 0, 00:10:23.662 "w_mbytes_per_sec": 0 00:10:23.662 }, 00:10:23.662 "claimed": true, 00:10:23.662 "claim_type": "exclusive_write", 00:10:23.662 "zoned": false, 00:10:23.662 "supported_io_types": { 00:10:23.662 "read": true, 00:10:23.662 "write": true, 00:10:23.662 "unmap": true, 00:10:23.662 "flush": true, 00:10:23.662 "reset": true, 00:10:23.662 "nvme_admin": false, 00:10:23.662 "nvme_io": false, 00:10:23.662 "nvme_io_md": false, 00:10:23.662 "write_zeroes": true, 00:10:23.662 "zcopy": true, 00:10:23.662 "get_zone_info": false, 00:10:23.662 "zone_management": false, 00:10:23.662 "zone_append": false, 00:10:23.662 "compare": false, 00:10:23.662 "compare_and_write": false, 00:10:23.662 "abort": true, 00:10:23.662 "seek_hole": false, 00:10:23.662 "seek_data": false, 00:10:23.662 "copy": true, 00:10:23.662 "nvme_iov_md": false 00:10:23.662 }, 00:10:23.662 "memory_domains": [ 00:10:23.662 { 00:10:23.662 "dma_device_id": "system", 00:10:23.662 "dma_device_type": 1 00:10:23.662 }, 00:10:23.662 { 00:10:23.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.662 "dma_device_type": 2 00:10:23.662 } 00:10:23.662 ], 00:10:23.662 "driver_specific": {} 00:10:23.662 } 00:10:23.662 ] 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.662 "name": "Existed_Raid", 00:10:23.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.662 "strip_size_kb": 0, 00:10:23.662 "state": "configuring", 00:10:23.662 "raid_level": "raid1", 00:10:23.662 "superblock": false, 00:10:23.662 "num_base_bdevs": 3, 00:10:23.662 "num_base_bdevs_discovered": 2, 00:10:23.662 "num_base_bdevs_operational": 3, 00:10:23.662 "base_bdevs_list": [ 00:10:23.662 { 00:10:23.662 "name": "BaseBdev1", 00:10:23.662 "uuid": 
"d4da0c32-d16a-4d74-a544-c4d97469b842", 00:10:23.662 "is_configured": true, 00:10:23.662 "data_offset": 0, 00:10:23.662 "data_size": 65536 00:10:23.662 }, 00:10:23.662 { 00:10:23.662 "name": null, 00:10:23.662 "uuid": "a2eeb176-fa08-4229-b29a-66845cda2fdb", 00:10:23.662 "is_configured": false, 00:10:23.662 "data_offset": 0, 00:10:23.662 "data_size": 65536 00:10:23.662 }, 00:10:23.662 { 00:10:23.662 "name": "BaseBdev3", 00:10:23.662 "uuid": "41fcb400-abc9-4c98-849f-50b1c396d760", 00:10:23.662 "is_configured": true, 00:10:23.662 "data_offset": 0, 00:10:23.662 "data_size": 65536 00:10:23.662 } 00:10:23.662 ] 00:10:23.662 }' 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.662 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.228 [2024-11-26 18:59:50.636442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:24.228 18:59:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.228 "name": "Existed_Raid", 00:10:24.228 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:24.228 "strip_size_kb": 0, 00:10:24.228 "state": "configuring", 00:10:24.228 "raid_level": "raid1", 00:10:24.228 "superblock": false, 00:10:24.228 "num_base_bdevs": 3, 00:10:24.228 "num_base_bdevs_discovered": 1, 00:10:24.228 "num_base_bdevs_operational": 3, 00:10:24.228 "base_bdevs_list": [ 00:10:24.228 { 00:10:24.228 "name": "BaseBdev1", 00:10:24.228 "uuid": "d4da0c32-d16a-4d74-a544-c4d97469b842", 00:10:24.228 "is_configured": true, 00:10:24.228 "data_offset": 0, 00:10:24.228 "data_size": 65536 00:10:24.228 }, 00:10:24.228 { 00:10:24.228 "name": null, 00:10:24.228 "uuid": "a2eeb176-fa08-4229-b29a-66845cda2fdb", 00:10:24.228 "is_configured": false, 00:10:24.228 "data_offset": 0, 00:10:24.228 "data_size": 65536 00:10:24.228 }, 00:10:24.228 { 00:10:24.228 "name": null, 00:10:24.228 "uuid": "41fcb400-abc9-4c98-849f-50b1c396d760", 00:10:24.228 "is_configured": false, 00:10:24.228 "data_offset": 0, 00:10:24.228 "data_size": 65536 00:10:24.228 } 00:10:24.228 ] 00:10:24.228 }' 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.228 18:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.796 [2024-11-26 18:59:51.212636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.796 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.797 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.797 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.797 18:59:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.797 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.797 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.797 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.797 "name": "Existed_Raid", 00:10:24.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.797 "strip_size_kb": 0, 00:10:24.797 "state": "configuring", 00:10:24.797 "raid_level": "raid1", 00:10:24.797 "superblock": false, 00:10:24.797 "num_base_bdevs": 3, 00:10:24.797 "num_base_bdevs_discovered": 2, 00:10:24.797 "num_base_bdevs_operational": 3, 00:10:24.797 "base_bdevs_list": [ 00:10:24.797 { 00:10:24.797 "name": "BaseBdev1", 00:10:24.797 "uuid": "d4da0c32-d16a-4d74-a544-c4d97469b842", 00:10:24.797 "is_configured": true, 00:10:24.797 "data_offset": 0, 00:10:24.797 "data_size": 65536 00:10:24.797 }, 00:10:24.797 { 00:10:24.797 "name": null, 00:10:24.797 "uuid": "a2eeb176-fa08-4229-b29a-66845cda2fdb", 00:10:24.797 "is_configured": false, 00:10:24.797 "data_offset": 0, 00:10:24.797 "data_size": 65536 00:10:24.797 }, 00:10:24.797 { 00:10:24.797 "name": "BaseBdev3", 00:10:24.797 "uuid": "41fcb400-abc9-4c98-849f-50b1c396d760", 00:10:24.797 "is_configured": true, 00:10:24.797 "data_offset": 0, 00:10:24.797 "data_size": 65536 00:10:24.797 } 00:10:24.797 ] 00:10:24.797 }' 00:10:24.797 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.797 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.363 [2024-11-26 18:59:51.760798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.363 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.364 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.364 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.364 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.364 18:59:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.364 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.364 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.364 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.364 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.364 18:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.364 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.364 "name": "Existed_Raid", 00:10:25.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.364 "strip_size_kb": 0, 00:10:25.364 "state": "configuring", 00:10:25.364 "raid_level": "raid1", 00:10:25.364 "superblock": false, 00:10:25.364 "num_base_bdevs": 3, 00:10:25.364 "num_base_bdevs_discovered": 1, 00:10:25.364 "num_base_bdevs_operational": 3, 00:10:25.364 "base_bdevs_list": [ 00:10:25.364 { 00:10:25.364 "name": null, 00:10:25.364 "uuid": "d4da0c32-d16a-4d74-a544-c4d97469b842", 00:10:25.364 "is_configured": false, 00:10:25.364 "data_offset": 0, 00:10:25.364 "data_size": 65536 00:10:25.364 }, 00:10:25.364 { 00:10:25.364 "name": null, 00:10:25.364 "uuid": "a2eeb176-fa08-4229-b29a-66845cda2fdb", 00:10:25.364 "is_configured": false, 00:10:25.364 "data_offset": 0, 00:10:25.364 "data_size": 65536 00:10:25.364 }, 00:10:25.364 { 00:10:25.364 "name": "BaseBdev3", 00:10:25.364 "uuid": "41fcb400-abc9-4c98-849f-50b1c396d760", 00:10:25.364 "is_configured": true, 00:10:25.364 "data_offset": 0, 00:10:25.364 "data_size": 65536 00:10:25.364 } 00:10:25.364 ] 00:10:25.364 }' 00:10:25.364 18:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.364 18:59:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.931 [2024-11-26 18:59:52.445469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.931 "name": "Existed_Raid", 00:10:25.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.931 "strip_size_kb": 0, 00:10:25.931 "state": "configuring", 00:10:25.931 "raid_level": "raid1", 00:10:25.931 "superblock": false, 00:10:25.931 "num_base_bdevs": 3, 00:10:25.931 "num_base_bdevs_discovered": 2, 00:10:25.931 "num_base_bdevs_operational": 3, 00:10:25.931 "base_bdevs_list": [ 00:10:25.931 { 00:10:25.931 "name": null, 00:10:25.931 "uuid": "d4da0c32-d16a-4d74-a544-c4d97469b842", 00:10:25.931 "is_configured": false, 00:10:25.931 "data_offset": 0, 00:10:25.931 "data_size": 65536 00:10:25.931 }, 00:10:25.931 { 00:10:25.931 "name": "BaseBdev2", 00:10:25.931 "uuid": "a2eeb176-fa08-4229-b29a-66845cda2fdb", 00:10:25.931 "is_configured": true, 00:10:25.931 "data_offset": 0, 00:10:25.931 "data_size": 65536 00:10:25.931 }, 00:10:25.931 { 
00:10:25.931 "name": "BaseBdev3", 00:10:25.931 "uuid": "41fcb400-abc9-4c98-849f-50b1c396d760", 00:10:25.931 "is_configured": true, 00:10:25.931 "data_offset": 0, 00:10:25.931 "data_size": 65536 00:10:25.931 } 00:10:25.931 ] 00:10:25.931 }' 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.931 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.498 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.498 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.498 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.498 18:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:26.498 18:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.498 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:26.498 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.498 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:26.498 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.498 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.498 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.498 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d4da0c32-d16a-4d74-a544-c4d97469b842 00:10:26.498 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.498 18:59:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.757 [2024-11-26 18:59:53.123443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:26.757 [2024-11-26 18:59:53.123524] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:26.757 [2024-11-26 18:59:53.123536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:26.757 [2024-11-26 18:59:53.123906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:26.757 [2024-11-26 18:59:53.124152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:26.757 [2024-11-26 18:59:53.124174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:26.757 [2024-11-26 18:59:53.124609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.757 NewBaseBdev 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.757 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.757 [ 00:10:26.757 { 00:10:26.757 "name": "NewBaseBdev", 00:10:26.757 "aliases": [ 00:10:26.757 "d4da0c32-d16a-4d74-a544-c4d97469b842" 00:10:26.757 ], 00:10:26.757 "product_name": "Malloc disk", 00:10:26.757 "block_size": 512, 00:10:26.757 "num_blocks": 65536, 00:10:26.757 "uuid": "d4da0c32-d16a-4d74-a544-c4d97469b842", 00:10:26.757 "assigned_rate_limits": { 00:10:26.758 "rw_ios_per_sec": 0, 00:10:26.758 "rw_mbytes_per_sec": 0, 00:10:26.758 "r_mbytes_per_sec": 0, 00:10:26.758 "w_mbytes_per_sec": 0 00:10:26.758 }, 00:10:26.758 "claimed": true, 00:10:26.758 "claim_type": "exclusive_write", 00:10:26.758 "zoned": false, 00:10:26.758 "supported_io_types": { 00:10:26.758 "read": true, 00:10:26.758 "write": true, 00:10:26.758 "unmap": true, 00:10:26.758 "flush": true, 00:10:26.758 "reset": true, 00:10:26.758 "nvme_admin": false, 00:10:26.758 "nvme_io": false, 00:10:26.758 "nvme_io_md": false, 00:10:26.758 "write_zeroes": true, 00:10:26.758 "zcopy": true, 00:10:26.758 "get_zone_info": false, 00:10:26.758 "zone_management": false, 00:10:26.758 "zone_append": false, 00:10:26.758 "compare": false, 00:10:26.758 "compare_and_write": false, 00:10:26.758 "abort": true, 00:10:26.758 "seek_hole": false, 00:10:26.758 "seek_data": false, 00:10:26.758 "copy": true, 00:10:26.758 "nvme_iov_md": false 00:10:26.758 }, 00:10:26.758 "memory_domains": [ 00:10:26.758 { 00:10:26.758 
"dma_device_id": "system", 00:10:26.758 "dma_device_type": 1 00:10:26.758 }, 00:10:26.758 { 00:10:26.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.758 "dma_device_type": 2 00:10:26.758 } 00:10:26.758 ], 00:10:26.758 "driver_specific": {} 00:10:26.758 } 00:10:26.758 ] 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.758 "name": "Existed_Raid", 00:10:26.758 "uuid": "fcacd288-4ee1-49cf-b2f9-3592f7a866cf", 00:10:26.758 "strip_size_kb": 0, 00:10:26.758 "state": "online", 00:10:26.758 "raid_level": "raid1", 00:10:26.758 "superblock": false, 00:10:26.758 "num_base_bdevs": 3, 00:10:26.758 "num_base_bdevs_discovered": 3, 00:10:26.758 "num_base_bdevs_operational": 3, 00:10:26.758 "base_bdevs_list": [ 00:10:26.758 { 00:10:26.758 "name": "NewBaseBdev", 00:10:26.758 "uuid": "d4da0c32-d16a-4d74-a544-c4d97469b842", 00:10:26.758 "is_configured": true, 00:10:26.758 "data_offset": 0, 00:10:26.758 "data_size": 65536 00:10:26.758 }, 00:10:26.758 { 00:10:26.758 "name": "BaseBdev2", 00:10:26.758 "uuid": "a2eeb176-fa08-4229-b29a-66845cda2fdb", 00:10:26.758 "is_configured": true, 00:10:26.758 "data_offset": 0, 00:10:26.758 "data_size": 65536 00:10:26.758 }, 00:10:26.758 { 00:10:26.758 "name": "BaseBdev3", 00:10:26.758 "uuid": "41fcb400-abc9-4c98-849f-50b1c396d760", 00:10:26.758 "is_configured": true, 00:10:26.758 "data_offset": 0, 00:10:26.758 "data_size": 65536 00:10:26.758 } 00:10:26.758 ] 00:10:26.758 }' 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.758 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.331 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:27.331 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:27.331 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.331 18:59:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.331 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.331 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.331 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:27.331 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.331 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.331 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.331 [2024-11-26 18:59:53.684027] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.331 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.331 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.331 "name": "Existed_Raid", 00:10:27.331 "aliases": [ 00:10:27.331 "fcacd288-4ee1-49cf-b2f9-3592f7a866cf" 00:10:27.331 ], 00:10:27.331 "product_name": "Raid Volume", 00:10:27.331 "block_size": 512, 00:10:27.331 "num_blocks": 65536, 00:10:27.331 "uuid": "fcacd288-4ee1-49cf-b2f9-3592f7a866cf", 00:10:27.331 "assigned_rate_limits": { 00:10:27.332 "rw_ios_per_sec": 0, 00:10:27.332 "rw_mbytes_per_sec": 0, 00:10:27.332 "r_mbytes_per_sec": 0, 00:10:27.332 "w_mbytes_per_sec": 0 00:10:27.332 }, 00:10:27.332 "claimed": false, 00:10:27.332 "zoned": false, 00:10:27.332 "supported_io_types": { 00:10:27.332 "read": true, 00:10:27.332 "write": true, 00:10:27.332 "unmap": false, 00:10:27.332 "flush": false, 00:10:27.332 "reset": true, 00:10:27.332 "nvme_admin": false, 00:10:27.332 "nvme_io": false, 00:10:27.332 "nvme_io_md": false, 00:10:27.332 "write_zeroes": true, 00:10:27.332 "zcopy": false, 00:10:27.332 
"get_zone_info": false, 00:10:27.332 "zone_management": false, 00:10:27.332 "zone_append": false, 00:10:27.332 "compare": false, 00:10:27.332 "compare_and_write": false, 00:10:27.332 "abort": false, 00:10:27.332 "seek_hole": false, 00:10:27.332 "seek_data": false, 00:10:27.332 "copy": false, 00:10:27.332 "nvme_iov_md": false 00:10:27.332 }, 00:10:27.332 "memory_domains": [ 00:10:27.332 { 00:10:27.332 "dma_device_id": "system", 00:10:27.332 "dma_device_type": 1 00:10:27.332 }, 00:10:27.332 { 00:10:27.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.332 "dma_device_type": 2 00:10:27.332 }, 00:10:27.332 { 00:10:27.332 "dma_device_id": "system", 00:10:27.332 "dma_device_type": 1 00:10:27.332 }, 00:10:27.332 { 00:10:27.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.332 "dma_device_type": 2 00:10:27.332 }, 00:10:27.332 { 00:10:27.332 "dma_device_id": "system", 00:10:27.332 "dma_device_type": 1 00:10:27.332 }, 00:10:27.332 { 00:10:27.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.332 "dma_device_type": 2 00:10:27.332 } 00:10:27.332 ], 00:10:27.332 "driver_specific": { 00:10:27.332 "raid": { 00:10:27.332 "uuid": "fcacd288-4ee1-49cf-b2f9-3592f7a866cf", 00:10:27.332 "strip_size_kb": 0, 00:10:27.332 "state": "online", 00:10:27.332 "raid_level": "raid1", 00:10:27.332 "superblock": false, 00:10:27.332 "num_base_bdevs": 3, 00:10:27.332 "num_base_bdevs_discovered": 3, 00:10:27.332 "num_base_bdevs_operational": 3, 00:10:27.332 "base_bdevs_list": [ 00:10:27.332 { 00:10:27.332 "name": "NewBaseBdev", 00:10:27.332 "uuid": "d4da0c32-d16a-4d74-a544-c4d97469b842", 00:10:27.332 "is_configured": true, 00:10:27.332 "data_offset": 0, 00:10:27.332 "data_size": 65536 00:10:27.332 }, 00:10:27.332 { 00:10:27.332 "name": "BaseBdev2", 00:10:27.332 "uuid": "a2eeb176-fa08-4229-b29a-66845cda2fdb", 00:10:27.332 "is_configured": true, 00:10:27.332 "data_offset": 0, 00:10:27.332 "data_size": 65536 00:10:27.332 }, 00:10:27.332 { 00:10:27.332 "name": "BaseBdev3", 00:10:27.332 "uuid": 
"41fcb400-abc9-4c98-849f-50b1c396d760", 00:10:27.332 "is_configured": true, 00:10:27.332 "data_offset": 0, 00:10:27.332 "data_size": 65536 00:10:27.332 } 00:10:27.332 ] 00:10:27.332 } 00:10:27.332 } 00:10:27.332 }' 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:27.332 BaseBdev2 00:10:27.332 BaseBdev3' 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.332 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.591 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.591 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.591 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.591 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.591 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:27.591 [2024-11-26 18:59:53.991688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.591 [2024-11-26 18:59:53.991741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.591 [2024-11-26 18:59:53.991850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.591 [2024-11-26 18:59:53.992337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.591 [2024-11-26 18:59:53.992355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:27.591 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.591 18:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67796 00:10:27.591 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67796 ']' 00:10:27.591 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67796 00:10:27.591 18:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:27.591 18:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.591 18:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67796 00:10:27.591 killing process with pid 67796 00:10:27.591 18:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.591 18:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.591 18:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67796' 00:10:27.591 18:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67796 00:10:27.591 
[2024-11-26 18:59:54.033450] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.591 18:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67796 00:10:27.849 [2024-11-26 18:59:54.329465] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:29.226 00:10:29.226 real 0m11.937s 00:10:29.226 user 0m19.527s 00:10:29.226 sys 0m1.737s 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.226 ************************************ 00:10:29.226 END TEST raid_state_function_test 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.226 ************************************ 00:10:29.226 18:59:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:29.226 18:59:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:29.226 18:59:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.226 18:59:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.226 ************************************ 00:10:29.226 START TEST raid_state_function_test_sb 00:10:29.226 ************************************ 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:29.226 18:59:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:29.226 
18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:29.226 Process raid pid: 68434 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68434 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68434' 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68434 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68434 ']' 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.226 18:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.226 [2024-11-26 18:59:55.710939] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:10:29.226 [2024-11-26 18:59:55.711136] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.485 [2024-11-26 18:59:55.898491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.485 [2024-11-26 18:59:56.045456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.744 [2024-11-26 18:59:56.276770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.744 [2024-11-26 18:59:56.276833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.313 [2024-11-26 18:59:56.749537] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.313 [2024-11-26 18:59:56.749738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.313 [2024-11-26 18:59:56.749768] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.313 [2024-11-26 18:59:56.749787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.313 [2024-11-26 18:59:56.749797] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:30.313 [2024-11-26 18:59:56.749812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.313 "name": "Existed_Raid", 00:10:30.313 "uuid": "fead124d-7963-4ca9-93a1-110c39599c73", 00:10:30.313 "strip_size_kb": 0, 00:10:30.313 "state": "configuring", 00:10:30.313 "raid_level": "raid1", 00:10:30.313 "superblock": true, 00:10:30.313 "num_base_bdevs": 3, 00:10:30.313 "num_base_bdevs_discovered": 0, 00:10:30.313 "num_base_bdevs_operational": 3, 00:10:30.313 "base_bdevs_list": [ 00:10:30.313 { 00:10:30.313 "name": "BaseBdev1", 00:10:30.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.313 "is_configured": false, 00:10:30.313 "data_offset": 0, 00:10:30.313 "data_size": 0 00:10:30.313 }, 00:10:30.313 { 00:10:30.313 "name": "BaseBdev2", 00:10:30.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.313 "is_configured": false, 00:10:30.313 "data_offset": 0, 00:10:30.313 "data_size": 0 00:10:30.313 }, 00:10:30.313 { 00:10:30.313 "name": "BaseBdev3", 00:10:30.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.313 "is_configured": false, 00:10:30.313 "data_offset": 0, 00:10:30.313 "data_size": 0 00:10:30.313 } 00:10:30.313 ] 00:10:30.313 }' 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.313 18:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.880 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.880 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.880 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.880 [2024-11-26 18:59:57.269612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.880 [2024-11-26 18:59:57.269660] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:30.880 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.880 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:30.880 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.880 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.880 [2024-11-26 18:59:57.277589] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.880 [2024-11-26 18:59:57.277643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.880 [2024-11-26 18:59:57.277659] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.880 [2024-11-26 18:59:57.277675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.880 [2024-11-26 18:59:57.277685] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:30.881 [2024-11-26 18:59:57.277699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.881 [2024-11-26 18:59:57.326559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.881 BaseBdev1 
00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.881 [ 00:10:30.881 { 00:10:30.881 "name": "BaseBdev1", 00:10:30.881 "aliases": [ 00:10:30.881 "9b1ecff8-0c2a-4c8f-bdcf-882f66944ab1" 00:10:30.881 ], 00:10:30.881 "product_name": "Malloc disk", 00:10:30.881 "block_size": 512, 00:10:30.881 "num_blocks": 65536, 00:10:30.881 "uuid": "9b1ecff8-0c2a-4c8f-bdcf-882f66944ab1", 00:10:30.881 "assigned_rate_limits": { 00:10:30.881 
"rw_ios_per_sec": 0, 00:10:30.881 "rw_mbytes_per_sec": 0, 00:10:30.881 "r_mbytes_per_sec": 0, 00:10:30.881 "w_mbytes_per_sec": 0 00:10:30.881 }, 00:10:30.881 "claimed": true, 00:10:30.881 "claim_type": "exclusive_write", 00:10:30.881 "zoned": false, 00:10:30.881 "supported_io_types": { 00:10:30.881 "read": true, 00:10:30.881 "write": true, 00:10:30.881 "unmap": true, 00:10:30.881 "flush": true, 00:10:30.881 "reset": true, 00:10:30.881 "nvme_admin": false, 00:10:30.881 "nvme_io": false, 00:10:30.881 "nvme_io_md": false, 00:10:30.881 "write_zeroes": true, 00:10:30.881 "zcopy": true, 00:10:30.881 "get_zone_info": false, 00:10:30.881 "zone_management": false, 00:10:30.881 "zone_append": false, 00:10:30.881 "compare": false, 00:10:30.881 "compare_and_write": false, 00:10:30.881 "abort": true, 00:10:30.881 "seek_hole": false, 00:10:30.881 "seek_data": false, 00:10:30.881 "copy": true, 00:10:30.881 "nvme_iov_md": false 00:10:30.881 }, 00:10:30.881 "memory_domains": [ 00:10:30.881 { 00:10:30.881 "dma_device_id": "system", 00:10:30.881 "dma_device_type": 1 00:10:30.881 }, 00:10:30.881 { 00:10:30.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.881 "dma_device_type": 2 00:10:30.881 } 00:10:30.881 ], 00:10:30.881 "driver_specific": {} 00:10:30.881 } 00:10:30.881 ] 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.881 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.881 "name": "Existed_Raid", 00:10:30.881 "uuid": "a10a89c7-71e6-4f99-a888-edad06e14458", 00:10:30.881 "strip_size_kb": 0, 00:10:30.881 "state": "configuring", 00:10:30.881 "raid_level": "raid1", 00:10:30.881 "superblock": true, 00:10:30.881 "num_base_bdevs": 3, 00:10:30.881 "num_base_bdevs_discovered": 1, 00:10:30.882 "num_base_bdevs_operational": 3, 00:10:30.882 "base_bdevs_list": [ 00:10:30.882 { 00:10:30.882 "name": "BaseBdev1", 00:10:30.882 "uuid": "9b1ecff8-0c2a-4c8f-bdcf-882f66944ab1", 00:10:30.882 "is_configured": true, 00:10:30.882 "data_offset": 2048, 00:10:30.882 "data_size": 63488 
00:10:30.882 }, 00:10:30.882 { 00:10:30.882 "name": "BaseBdev2", 00:10:30.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.882 "is_configured": false, 00:10:30.882 "data_offset": 0, 00:10:30.882 "data_size": 0 00:10:30.882 }, 00:10:30.882 { 00:10:30.882 "name": "BaseBdev3", 00:10:30.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.882 "is_configured": false, 00:10:30.882 "data_offset": 0, 00:10:30.882 "data_size": 0 00:10:30.882 } 00:10:30.882 ] 00:10:30.882 }' 00:10:30.882 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.882 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.449 [2024-11-26 18:59:57.870766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.449 [2024-11-26 18:59:57.870838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.449 [2024-11-26 18:59:57.882822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.449 [2024-11-26 18:59:57.885726] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.449 [2024-11-26 18:59:57.885899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.449 [2024-11-26 18:59:57.886040] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.449 [2024-11-26 18:59:57.886101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.449 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.449 "name": "Existed_Raid", 00:10:31.449 "uuid": "69612c38-7983-4a23-a5a6-3767f7b76cc3", 00:10:31.449 "strip_size_kb": 0, 00:10:31.449 "state": "configuring", 00:10:31.449 "raid_level": "raid1", 00:10:31.449 "superblock": true, 00:10:31.449 "num_base_bdevs": 3, 00:10:31.449 "num_base_bdevs_discovered": 1, 00:10:31.449 "num_base_bdevs_operational": 3, 00:10:31.449 "base_bdevs_list": [ 00:10:31.449 { 00:10:31.449 "name": "BaseBdev1", 00:10:31.450 "uuid": "9b1ecff8-0c2a-4c8f-bdcf-882f66944ab1", 00:10:31.450 "is_configured": true, 00:10:31.450 "data_offset": 2048, 00:10:31.450 "data_size": 63488 00:10:31.450 }, 00:10:31.450 { 00:10:31.450 "name": "BaseBdev2", 00:10:31.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.450 "is_configured": false, 00:10:31.450 "data_offset": 0, 00:10:31.450 "data_size": 0 00:10:31.450 }, 00:10:31.450 { 00:10:31.450 "name": "BaseBdev3", 00:10:31.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.450 "is_configured": false, 00:10:31.450 "data_offset": 0, 00:10:31.450 "data_size": 0 00:10:31.450 } 00:10:31.450 ] 00:10:31.450 }' 00:10:31.450 18:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.450 18:59:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.037 [2024-11-26 18:59:58.438971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:32.037 BaseBdev2 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:32.037 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.037 [ 00:10:32.037 { 00:10:32.037 "name": "BaseBdev2", 00:10:32.037 "aliases": [ 00:10:32.037 "1a32e62d-bf20-49df-bbee-7f6b2c9c4a3a" 00:10:32.037 ], 00:10:32.037 "product_name": "Malloc disk", 00:10:32.037 "block_size": 512, 00:10:32.037 "num_blocks": 65536, 00:10:32.037 "uuid": "1a32e62d-bf20-49df-bbee-7f6b2c9c4a3a", 00:10:32.037 "assigned_rate_limits": { 00:10:32.037 "rw_ios_per_sec": 0, 00:10:32.037 "rw_mbytes_per_sec": 0, 00:10:32.037 "r_mbytes_per_sec": 0, 00:10:32.037 "w_mbytes_per_sec": 0 00:10:32.037 }, 00:10:32.037 "claimed": true, 00:10:32.037 "claim_type": "exclusive_write", 00:10:32.037 "zoned": false, 00:10:32.037 "supported_io_types": { 00:10:32.037 "read": true, 00:10:32.037 "write": true, 00:10:32.037 "unmap": true, 00:10:32.037 "flush": true, 00:10:32.037 "reset": true, 00:10:32.037 "nvme_admin": false, 00:10:32.037 "nvme_io": false, 00:10:32.037 "nvme_io_md": false, 00:10:32.037 "write_zeroes": true, 00:10:32.037 "zcopy": true, 00:10:32.037 "get_zone_info": false, 00:10:32.037 "zone_management": false, 00:10:32.037 "zone_append": false, 00:10:32.037 "compare": false, 00:10:32.037 "compare_and_write": false, 00:10:32.037 "abort": true, 00:10:32.037 "seek_hole": false, 00:10:32.037 "seek_data": false, 00:10:32.037 "copy": true, 00:10:32.037 "nvme_iov_md": false 00:10:32.037 }, 00:10:32.037 "memory_domains": [ 00:10:32.037 { 00:10:32.037 "dma_device_id": "system", 00:10:32.037 "dma_device_type": 1 00:10:32.037 }, 00:10:32.037 { 00:10:32.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.037 "dma_device_type": 2 00:10:32.037 } 00:10:32.037 ], 00:10:32.037 "driver_specific": {} 00:10:32.038 } 00:10:32.038 ] 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.038 
18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.038 "name": "Existed_Raid", 00:10:32.038 "uuid": "69612c38-7983-4a23-a5a6-3767f7b76cc3", 00:10:32.038 "strip_size_kb": 0, 00:10:32.038 "state": "configuring", 00:10:32.038 "raid_level": "raid1", 00:10:32.038 "superblock": true, 00:10:32.038 "num_base_bdevs": 3, 00:10:32.038 "num_base_bdevs_discovered": 2, 00:10:32.038 "num_base_bdevs_operational": 3, 00:10:32.038 "base_bdevs_list": [ 00:10:32.038 { 00:10:32.038 "name": "BaseBdev1", 00:10:32.038 "uuid": "9b1ecff8-0c2a-4c8f-bdcf-882f66944ab1", 00:10:32.038 "is_configured": true, 00:10:32.038 "data_offset": 2048, 00:10:32.038 "data_size": 63488 00:10:32.038 }, 00:10:32.038 { 00:10:32.038 "name": "BaseBdev2", 00:10:32.038 "uuid": "1a32e62d-bf20-49df-bbee-7f6b2c9c4a3a", 00:10:32.038 "is_configured": true, 00:10:32.038 "data_offset": 2048, 00:10:32.038 "data_size": 63488 00:10:32.038 }, 00:10:32.038 { 00:10:32.038 "name": "BaseBdev3", 00:10:32.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.038 "is_configured": false, 00:10:32.038 "data_offset": 0, 00:10:32.038 "data_size": 0 00:10:32.038 } 00:10:32.038 ] 00:10:32.038 }' 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.038 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.608 18:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:32.608 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.608 18:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.608 [2024-11-26 18:59:59.052104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.608 [2024-11-26 18:59:59.052473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:32.608 [2024-11-26 18:59:59.052504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:32.608 BaseBdev3 00:10:32.608 [2024-11-26 18:59:59.052855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:32.608 [2024-11-26 18:59:59.053097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:32.608 [2024-11-26 18:59:59.053121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:32.608 [2024-11-26 18:59:59.053335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.608 18:59:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.608 [ 00:10:32.608 { 00:10:32.608 "name": "BaseBdev3", 00:10:32.608 "aliases": [ 00:10:32.608 "984a887f-36b7-4088-a361-3aa63dd80882" 00:10:32.608 ], 00:10:32.608 "product_name": "Malloc disk", 00:10:32.608 "block_size": 512, 00:10:32.608 "num_blocks": 65536, 00:10:32.608 "uuid": "984a887f-36b7-4088-a361-3aa63dd80882", 00:10:32.608 "assigned_rate_limits": { 00:10:32.608 "rw_ios_per_sec": 0, 00:10:32.608 "rw_mbytes_per_sec": 0, 00:10:32.608 "r_mbytes_per_sec": 0, 00:10:32.608 "w_mbytes_per_sec": 0 00:10:32.608 }, 00:10:32.608 "claimed": true, 00:10:32.608 "claim_type": "exclusive_write", 00:10:32.608 "zoned": false, 00:10:32.608 "supported_io_types": { 00:10:32.608 "read": true, 00:10:32.608 "write": true, 00:10:32.608 "unmap": true, 00:10:32.608 "flush": true, 00:10:32.608 "reset": true, 00:10:32.608 "nvme_admin": false, 00:10:32.608 "nvme_io": false, 00:10:32.608 "nvme_io_md": false, 00:10:32.608 "write_zeroes": true, 00:10:32.608 "zcopy": true, 00:10:32.608 "get_zone_info": false, 00:10:32.608 "zone_management": false, 00:10:32.608 "zone_append": false, 00:10:32.608 "compare": false, 00:10:32.608 "compare_and_write": false, 00:10:32.608 "abort": true, 00:10:32.608 "seek_hole": false, 00:10:32.608 "seek_data": false, 00:10:32.608 "copy": true, 00:10:32.608 "nvme_iov_md": false 00:10:32.608 }, 00:10:32.608 "memory_domains": [ 00:10:32.608 { 00:10:32.608 "dma_device_id": "system", 00:10:32.608 "dma_device_type": 1 00:10:32.608 }, 00:10:32.608 { 00:10:32.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.608 "dma_device_type": 2 00:10:32.608 } 00:10:32.608 ], 00:10:32.608 "driver_specific": {} 00:10:32.608 } 00:10:32.608 ] 
00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.608 18:59:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.608 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.608 "name": "Existed_Raid", 00:10:32.608 "uuid": "69612c38-7983-4a23-a5a6-3767f7b76cc3", 00:10:32.608 "strip_size_kb": 0, 00:10:32.608 "state": "online", 00:10:32.608 "raid_level": "raid1", 00:10:32.608 "superblock": true, 00:10:32.608 "num_base_bdevs": 3, 00:10:32.608 "num_base_bdevs_discovered": 3, 00:10:32.608 "num_base_bdevs_operational": 3, 00:10:32.608 "base_bdevs_list": [ 00:10:32.608 { 00:10:32.608 "name": "BaseBdev1", 00:10:32.608 "uuid": "9b1ecff8-0c2a-4c8f-bdcf-882f66944ab1", 00:10:32.608 "is_configured": true, 00:10:32.608 "data_offset": 2048, 00:10:32.608 "data_size": 63488 00:10:32.608 }, 00:10:32.608 { 00:10:32.608 "name": "BaseBdev2", 00:10:32.608 "uuid": "1a32e62d-bf20-49df-bbee-7f6b2c9c4a3a", 00:10:32.608 "is_configured": true, 00:10:32.608 "data_offset": 2048, 00:10:32.608 "data_size": 63488 00:10:32.608 }, 00:10:32.608 { 00:10:32.608 "name": "BaseBdev3", 00:10:32.608 "uuid": "984a887f-36b7-4088-a361-3aa63dd80882", 00:10:32.608 "is_configured": true, 00:10:32.608 "data_offset": 2048, 00:10:32.609 "data_size": 63488 00:10:32.609 } 00:10:32.609 ] 00:10:32.609 }' 00:10:32.609 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.609 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.175 [2024-11-26 18:59:59.616722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.175 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.175 "name": "Existed_Raid", 00:10:33.175 "aliases": [ 00:10:33.175 "69612c38-7983-4a23-a5a6-3767f7b76cc3" 00:10:33.175 ], 00:10:33.175 "product_name": "Raid Volume", 00:10:33.175 "block_size": 512, 00:10:33.175 "num_blocks": 63488, 00:10:33.175 "uuid": "69612c38-7983-4a23-a5a6-3767f7b76cc3", 00:10:33.175 "assigned_rate_limits": { 00:10:33.175 "rw_ios_per_sec": 0, 00:10:33.175 "rw_mbytes_per_sec": 0, 00:10:33.175 "r_mbytes_per_sec": 0, 00:10:33.175 "w_mbytes_per_sec": 0 00:10:33.175 }, 00:10:33.175 "claimed": false, 00:10:33.175 "zoned": false, 00:10:33.175 "supported_io_types": { 00:10:33.175 "read": true, 00:10:33.175 "write": true, 00:10:33.175 "unmap": false, 00:10:33.175 "flush": false, 00:10:33.175 "reset": true, 00:10:33.175 "nvme_admin": false, 00:10:33.175 "nvme_io": false, 00:10:33.175 "nvme_io_md": false, 00:10:33.175 
"write_zeroes": true, 00:10:33.175 "zcopy": false, 00:10:33.175 "get_zone_info": false, 00:10:33.175 "zone_management": false, 00:10:33.175 "zone_append": false, 00:10:33.175 "compare": false, 00:10:33.175 "compare_and_write": false, 00:10:33.175 "abort": false, 00:10:33.175 "seek_hole": false, 00:10:33.175 "seek_data": false, 00:10:33.175 "copy": false, 00:10:33.175 "nvme_iov_md": false 00:10:33.175 }, 00:10:33.175 "memory_domains": [ 00:10:33.175 { 00:10:33.175 "dma_device_id": "system", 00:10:33.175 "dma_device_type": 1 00:10:33.175 }, 00:10:33.175 { 00:10:33.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.175 "dma_device_type": 2 00:10:33.175 }, 00:10:33.175 { 00:10:33.175 "dma_device_id": "system", 00:10:33.175 "dma_device_type": 1 00:10:33.175 }, 00:10:33.175 { 00:10:33.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.175 "dma_device_type": 2 00:10:33.175 }, 00:10:33.175 { 00:10:33.175 "dma_device_id": "system", 00:10:33.175 "dma_device_type": 1 00:10:33.175 }, 00:10:33.175 { 00:10:33.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.175 "dma_device_type": 2 00:10:33.175 } 00:10:33.175 ], 00:10:33.175 "driver_specific": { 00:10:33.175 "raid": { 00:10:33.175 "uuid": "69612c38-7983-4a23-a5a6-3767f7b76cc3", 00:10:33.175 "strip_size_kb": 0, 00:10:33.175 "state": "online", 00:10:33.175 "raid_level": "raid1", 00:10:33.175 "superblock": true, 00:10:33.175 "num_base_bdevs": 3, 00:10:33.175 "num_base_bdevs_discovered": 3, 00:10:33.175 "num_base_bdevs_operational": 3, 00:10:33.175 "base_bdevs_list": [ 00:10:33.175 { 00:10:33.175 "name": "BaseBdev1", 00:10:33.175 "uuid": "9b1ecff8-0c2a-4c8f-bdcf-882f66944ab1", 00:10:33.175 "is_configured": true, 00:10:33.175 "data_offset": 2048, 00:10:33.175 "data_size": 63488 00:10:33.175 }, 00:10:33.175 { 00:10:33.175 "name": "BaseBdev2", 00:10:33.175 "uuid": "1a32e62d-bf20-49df-bbee-7f6b2c9c4a3a", 00:10:33.175 "is_configured": true, 00:10:33.175 "data_offset": 2048, 00:10:33.175 "data_size": 63488 00:10:33.175 }, 
00:10:33.175 { 00:10:33.175 "name": "BaseBdev3", 00:10:33.175 "uuid": "984a887f-36b7-4088-a361-3aa63dd80882", 00:10:33.175 "is_configured": true, 00:10:33.176 "data_offset": 2048, 00:10:33.176 "data_size": 63488 00:10:33.176 } 00:10:33.176 ] 00:10:33.176 } 00:10:33.176 } 00:10:33.176 }' 00:10:33.176 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.176 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:33.176 BaseBdev2 00:10:33.176 BaseBdev3' 00:10:33.176 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.176 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.176 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.176 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:33.176 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.176 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.176 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.176 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.435 
18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.435 18:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.435 [2024-11-26 18:59:59.932456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.435 
19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.435 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.694 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.694 "name": "Existed_Raid", 00:10:33.694 "uuid": "69612c38-7983-4a23-a5a6-3767f7b76cc3", 00:10:33.694 "strip_size_kb": 0, 00:10:33.694 "state": "online", 00:10:33.694 "raid_level": "raid1", 00:10:33.694 "superblock": true, 00:10:33.694 "num_base_bdevs": 3, 00:10:33.694 "num_base_bdevs_discovered": 2, 00:10:33.694 "num_base_bdevs_operational": 2, 00:10:33.694 "base_bdevs_list": [ 00:10:33.694 { 00:10:33.694 "name": null, 00:10:33.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.694 "is_configured": false, 00:10:33.694 "data_offset": 0, 00:10:33.694 "data_size": 63488 00:10:33.694 }, 00:10:33.694 { 00:10:33.694 "name": "BaseBdev2", 00:10:33.694 "uuid": "1a32e62d-bf20-49df-bbee-7f6b2c9c4a3a", 00:10:33.694 "is_configured": true, 00:10:33.694 "data_offset": 2048, 00:10:33.694 "data_size": 63488 00:10:33.694 }, 00:10:33.694 { 00:10:33.694 "name": "BaseBdev3", 00:10:33.694 "uuid": "984a887f-36b7-4088-a361-3aa63dd80882", 00:10:33.694 "is_configured": true, 00:10:33.694 "data_offset": 2048, 00:10:33.694 "data_size": 63488 00:10:33.694 } 00:10:33.694 ] 00:10:33.694 }' 00:10:33.694 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.694 
19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.952 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:33.952 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.952 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.952 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.952 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.952 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.952 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.211 [2024-11-26 19:00:00.605470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.211 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.211 [2024-11-26 19:00:00.753270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:34.211 [2024-11-26 19:00:00.753446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.470 [2024-11-26 19:00:00.841968] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.470 [2024-11-26 19:00:00.842247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.470 [2024-11-26 19:00:00.842304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:34.470 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.471 BaseBdev2 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.471 19:00:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.471 [ 00:10:34.471 { 00:10:34.471 "name": "BaseBdev2", 00:10:34.471 "aliases": [ 00:10:34.471 "88cbc6e9-b3e0-40de-bd8b-feaa1dffc14c" 00:10:34.471 ], 00:10:34.471 "product_name": "Malloc disk", 00:10:34.471 "block_size": 512, 00:10:34.471 "num_blocks": 65536, 00:10:34.471 "uuid": "88cbc6e9-b3e0-40de-bd8b-feaa1dffc14c", 00:10:34.471 "assigned_rate_limits": { 00:10:34.471 "rw_ios_per_sec": 0, 00:10:34.471 "rw_mbytes_per_sec": 0, 00:10:34.471 "r_mbytes_per_sec": 0, 00:10:34.471 "w_mbytes_per_sec": 0 00:10:34.471 }, 00:10:34.471 "claimed": false, 00:10:34.471 "zoned": false, 00:10:34.471 "supported_io_types": { 00:10:34.471 "read": true, 00:10:34.471 "write": true, 00:10:34.471 "unmap": true, 00:10:34.471 "flush": true, 00:10:34.471 "reset": true, 00:10:34.471 "nvme_admin": false, 00:10:34.471 "nvme_io": false, 00:10:34.471 "nvme_io_md": false, 00:10:34.471 
"write_zeroes": true, 00:10:34.471 "zcopy": true, 00:10:34.471 "get_zone_info": false, 00:10:34.471 "zone_management": false, 00:10:34.471 "zone_append": false, 00:10:34.471 "compare": false, 00:10:34.471 "compare_and_write": false, 00:10:34.471 "abort": true, 00:10:34.471 "seek_hole": false, 00:10:34.471 "seek_data": false, 00:10:34.471 "copy": true, 00:10:34.471 "nvme_iov_md": false 00:10:34.471 }, 00:10:34.471 "memory_domains": [ 00:10:34.471 { 00:10:34.471 "dma_device_id": "system", 00:10:34.471 "dma_device_type": 1 00:10:34.471 }, 00:10:34.471 { 00:10:34.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.471 "dma_device_type": 2 00:10:34.471 } 00:10:34.471 ], 00:10:34.471 "driver_specific": {} 00:10:34.471 } 00:10:34.471 ] 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.471 19:00:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.471 BaseBdev3 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.471 [ 00:10:34.471 { 00:10:34.471 "name": "BaseBdev3", 00:10:34.471 "aliases": [ 00:10:34.471 "adfff686-c24d-40e9-ac4b-8073c6916bcb" 00:10:34.471 ], 00:10:34.471 "product_name": "Malloc disk", 00:10:34.471 "block_size": 512, 00:10:34.471 "num_blocks": 65536, 00:10:34.471 "uuid": "adfff686-c24d-40e9-ac4b-8073c6916bcb", 00:10:34.471 "assigned_rate_limits": { 00:10:34.471 "rw_ios_per_sec": 0, 00:10:34.471 "rw_mbytes_per_sec": 0, 00:10:34.471 "r_mbytes_per_sec": 0, 00:10:34.471 "w_mbytes_per_sec": 0 00:10:34.471 }, 00:10:34.471 "claimed": false, 00:10:34.471 "zoned": false, 00:10:34.471 "supported_io_types": { 00:10:34.471 "read": true, 00:10:34.471 "write": true, 00:10:34.471 "unmap": true, 00:10:34.471 "flush": true, 00:10:34.471 "reset": true, 00:10:34.471 "nvme_admin": false, 00:10:34.471 "nvme_io": false, 
00:10:34.471 "nvme_io_md": false, 00:10:34.471 "write_zeroes": true, 00:10:34.471 "zcopy": true, 00:10:34.471 "get_zone_info": false, 00:10:34.471 "zone_management": false, 00:10:34.471 "zone_append": false, 00:10:34.471 "compare": false, 00:10:34.471 "compare_and_write": false, 00:10:34.471 "abort": true, 00:10:34.471 "seek_hole": false, 00:10:34.471 "seek_data": false, 00:10:34.471 "copy": true, 00:10:34.471 "nvme_iov_md": false 00:10:34.471 }, 00:10:34.471 "memory_domains": [ 00:10:34.471 { 00:10:34.471 "dma_device_id": "system", 00:10:34.471 "dma_device_type": 1 00:10:34.471 }, 00:10:34.471 { 00:10:34.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.471 "dma_device_type": 2 00:10:34.471 } 00:10:34.471 ], 00:10:34.471 "driver_specific": {} 00:10:34.471 } 00:10:34.471 ] 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.471 [2024-11-26 19:00:01.065081] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.471 [2024-11-26 19:00:01.065267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.471 [2024-11-26 19:00:01.065444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:10:34.471 [2024-11-26 19:00:01.068316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.471 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.472 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.472 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.472 19:00:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.730 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.730 "name": "Existed_Raid", 00:10:34.730 "uuid": "aa2dd5e9-c019-4d8b-a8b8-14bd81ddc62a", 00:10:34.730 "strip_size_kb": 0, 00:10:34.730 "state": "configuring", 00:10:34.730 "raid_level": "raid1", 00:10:34.730 "superblock": true, 00:10:34.730 "num_base_bdevs": 3, 00:10:34.730 "num_base_bdevs_discovered": 2, 00:10:34.730 "num_base_bdevs_operational": 3, 00:10:34.730 "base_bdevs_list": [ 00:10:34.730 { 00:10:34.730 "name": "BaseBdev1", 00:10:34.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.730 "is_configured": false, 00:10:34.730 "data_offset": 0, 00:10:34.730 "data_size": 0 00:10:34.730 }, 00:10:34.730 { 00:10:34.730 "name": "BaseBdev2", 00:10:34.730 "uuid": "88cbc6e9-b3e0-40de-bd8b-feaa1dffc14c", 00:10:34.730 "is_configured": true, 00:10:34.730 "data_offset": 2048, 00:10:34.730 "data_size": 63488 00:10:34.730 }, 00:10:34.730 { 00:10:34.730 "name": "BaseBdev3", 00:10:34.730 "uuid": "adfff686-c24d-40e9-ac4b-8073c6916bcb", 00:10:34.730 "is_configured": true, 00:10:34.730 "data_offset": 2048, 00:10:34.730 "data_size": 63488 00:10:34.730 } 00:10:34.730 ] 00:10:34.730 }' 00:10:34.730 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.730 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.007 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:35.007 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.007 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.282 [2024-11-26 19:00:01.617290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.282 "name": "Existed_Raid", 00:10:35.282 "uuid": 
"aa2dd5e9-c019-4d8b-a8b8-14bd81ddc62a", 00:10:35.282 "strip_size_kb": 0, 00:10:35.282 "state": "configuring", 00:10:35.282 "raid_level": "raid1", 00:10:35.282 "superblock": true, 00:10:35.282 "num_base_bdevs": 3, 00:10:35.282 "num_base_bdevs_discovered": 1, 00:10:35.282 "num_base_bdevs_operational": 3, 00:10:35.282 "base_bdevs_list": [ 00:10:35.282 { 00:10:35.282 "name": "BaseBdev1", 00:10:35.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.282 "is_configured": false, 00:10:35.282 "data_offset": 0, 00:10:35.282 "data_size": 0 00:10:35.282 }, 00:10:35.282 { 00:10:35.282 "name": null, 00:10:35.282 "uuid": "88cbc6e9-b3e0-40de-bd8b-feaa1dffc14c", 00:10:35.282 "is_configured": false, 00:10:35.282 "data_offset": 0, 00:10:35.282 "data_size": 63488 00:10:35.282 }, 00:10:35.282 { 00:10:35.282 "name": "BaseBdev3", 00:10:35.282 "uuid": "adfff686-c24d-40e9-ac4b-8073c6916bcb", 00:10:35.282 "is_configured": true, 00:10:35.282 "data_offset": 2048, 00:10:35.282 "data_size": 63488 00:10:35.282 } 00:10:35.282 ] 00:10:35.282 }' 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.282 19:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.541 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.541 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.541 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:35.541 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:35.799 19:00:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.799 [2024-11-26 19:00:02.243908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.799 BaseBdev1 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:35.799 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.799 [ 00:10:35.799 { 00:10:35.799 "name": "BaseBdev1", 00:10:35.799 "aliases": [ 00:10:35.799 "37ddb919-4c85-4c26-9e9e-93c5613b2504" 00:10:35.799 ], 00:10:35.799 "product_name": "Malloc disk", 00:10:35.799 "block_size": 512, 00:10:35.800 "num_blocks": 65536, 00:10:35.800 "uuid": "37ddb919-4c85-4c26-9e9e-93c5613b2504", 00:10:35.800 "assigned_rate_limits": { 00:10:35.800 "rw_ios_per_sec": 0, 00:10:35.800 "rw_mbytes_per_sec": 0, 00:10:35.800 "r_mbytes_per_sec": 0, 00:10:35.800 "w_mbytes_per_sec": 0 00:10:35.800 }, 00:10:35.800 "claimed": true, 00:10:35.800 "claim_type": "exclusive_write", 00:10:35.800 "zoned": false, 00:10:35.800 "supported_io_types": { 00:10:35.800 "read": true, 00:10:35.800 "write": true, 00:10:35.800 "unmap": true, 00:10:35.800 "flush": true, 00:10:35.800 "reset": true, 00:10:35.800 "nvme_admin": false, 00:10:35.800 "nvme_io": false, 00:10:35.800 "nvme_io_md": false, 00:10:35.800 "write_zeroes": true, 00:10:35.800 "zcopy": true, 00:10:35.800 "get_zone_info": false, 00:10:35.800 "zone_management": false, 00:10:35.800 "zone_append": false, 00:10:35.800 "compare": false, 00:10:35.800 "compare_and_write": false, 00:10:35.800 "abort": true, 00:10:35.800 "seek_hole": false, 00:10:35.800 "seek_data": false, 00:10:35.800 "copy": true, 00:10:35.800 "nvme_iov_md": false 00:10:35.800 }, 00:10:35.800 "memory_domains": [ 00:10:35.800 { 00:10:35.800 "dma_device_id": "system", 00:10:35.800 "dma_device_type": 1 00:10:35.800 }, 00:10:35.800 { 00:10:35.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.800 "dma_device_type": 2 00:10:35.800 } 00:10:35.800 ], 00:10:35.800 "driver_specific": {} 00:10:35.800 } 00:10:35.800 ] 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:35.800 
19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.800 "name": "Existed_Raid", 00:10:35.800 "uuid": "aa2dd5e9-c019-4d8b-a8b8-14bd81ddc62a", 00:10:35.800 "strip_size_kb": 0, 
00:10:35.800 "state": "configuring", 00:10:35.800 "raid_level": "raid1", 00:10:35.800 "superblock": true, 00:10:35.800 "num_base_bdevs": 3, 00:10:35.800 "num_base_bdevs_discovered": 2, 00:10:35.800 "num_base_bdevs_operational": 3, 00:10:35.800 "base_bdevs_list": [ 00:10:35.800 { 00:10:35.800 "name": "BaseBdev1", 00:10:35.800 "uuid": "37ddb919-4c85-4c26-9e9e-93c5613b2504", 00:10:35.800 "is_configured": true, 00:10:35.800 "data_offset": 2048, 00:10:35.800 "data_size": 63488 00:10:35.800 }, 00:10:35.800 { 00:10:35.800 "name": null, 00:10:35.800 "uuid": "88cbc6e9-b3e0-40de-bd8b-feaa1dffc14c", 00:10:35.800 "is_configured": false, 00:10:35.800 "data_offset": 0, 00:10:35.800 "data_size": 63488 00:10:35.800 }, 00:10:35.800 { 00:10:35.800 "name": "BaseBdev3", 00:10:35.800 "uuid": "adfff686-c24d-40e9-ac4b-8073c6916bcb", 00:10:35.800 "is_configured": true, 00:10:35.800 "data_offset": 2048, 00:10:35.800 "data_size": 63488 00:10:35.800 } 00:10:35.800 ] 00:10:35.800 }' 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.800 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.367 [2024-11-26 19:00:02.860148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.367 "name": "Existed_Raid", 00:10:36.367 "uuid": "aa2dd5e9-c019-4d8b-a8b8-14bd81ddc62a", 00:10:36.367 "strip_size_kb": 0, 00:10:36.367 "state": "configuring", 00:10:36.367 "raid_level": "raid1", 00:10:36.367 "superblock": true, 00:10:36.367 "num_base_bdevs": 3, 00:10:36.367 "num_base_bdevs_discovered": 1, 00:10:36.367 "num_base_bdevs_operational": 3, 00:10:36.367 "base_bdevs_list": [ 00:10:36.367 { 00:10:36.367 "name": "BaseBdev1", 00:10:36.367 "uuid": "37ddb919-4c85-4c26-9e9e-93c5613b2504", 00:10:36.367 "is_configured": true, 00:10:36.367 "data_offset": 2048, 00:10:36.367 "data_size": 63488 00:10:36.367 }, 00:10:36.367 { 00:10:36.367 "name": null, 00:10:36.367 "uuid": "88cbc6e9-b3e0-40de-bd8b-feaa1dffc14c", 00:10:36.367 "is_configured": false, 00:10:36.367 "data_offset": 0, 00:10:36.367 "data_size": 63488 00:10:36.367 }, 00:10:36.367 { 00:10:36.367 "name": null, 00:10:36.367 "uuid": "adfff686-c24d-40e9-ac4b-8073c6916bcb", 00:10:36.367 "is_configured": false, 00:10:36.367 "data_offset": 0, 00:10:36.367 "data_size": 63488 00:10:36.367 } 00:10:36.367 ] 00:10:36.367 }' 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.367 19:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.933 [2024-11-26 19:00:03.460417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.933 "name": "Existed_Raid", 00:10:36.933 "uuid": "aa2dd5e9-c019-4d8b-a8b8-14bd81ddc62a", 00:10:36.933 "strip_size_kb": 0, 00:10:36.933 "state": "configuring", 00:10:36.933 "raid_level": "raid1", 00:10:36.933 "superblock": true, 00:10:36.933 "num_base_bdevs": 3, 00:10:36.933 "num_base_bdevs_discovered": 2, 00:10:36.933 "num_base_bdevs_operational": 3, 00:10:36.933 "base_bdevs_list": [ 00:10:36.933 { 00:10:36.933 "name": "BaseBdev1", 00:10:36.933 "uuid": "37ddb919-4c85-4c26-9e9e-93c5613b2504", 00:10:36.933 "is_configured": true, 00:10:36.933 "data_offset": 2048, 00:10:36.933 "data_size": 63488 00:10:36.933 }, 00:10:36.933 { 00:10:36.933 "name": null, 00:10:36.933 "uuid": "88cbc6e9-b3e0-40de-bd8b-feaa1dffc14c", 00:10:36.933 "is_configured": false, 00:10:36.933 "data_offset": 0, 00:10:36.933 "data_size": 63488 00:10:36.933 }, 00:10:36.933 { 00:10:36.933 "name": "BaseBdev3", 00:10:36.933 "uuid": "adfff686-c24d-40e9-ac4b-8073c6916bcb", 00:10:36.933 "is_configured": true, 00:10:36.933 "data_offset": 2048, 00:10:36.933 "data_size": 63488 00:10:36.933 } 00:10:36.933 ] 00:10:36.933 }' 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.933 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.499 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.499 19:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.499 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.499 19:00:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.499 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.499 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:37.499 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.499 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.499 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.499 [2024-11-26 19:00:04.056584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.758 "name": "Existed_Raid", 00:10:37.758 "uuid": "aa2dd5e9-c019-4d8b-a8b8-14bd81ddc62a", 00:10:37.758 "strip_size_kb": 0, 00:10:37.758 "state": "configuring", 00:10:37.758 "raid_level": "raid1", 00:10:37.758 "superblock": true, 00:10:37.758 "num_base_bdevs": 3, 00:10:37.758 "num_base_bdevs_discovered": 1, 00:10:37.758 "num_base_bdevs_operational": 3, 00:10:37.758 "base_bdevs_list": [ 00:10:37.758 { 00:10:37.758 "name": null, 00:10:37.758 "uuid": "37ddb919-4c85-4c26-9e9e-93c5613b2504", 00:10:37.758 "is_configured": false, 00:10:37.758 "data_offset": 0, 00:10:37.758 "data_size": 63488 00:10:37.758 }, 00:10:37.758 { 00:10:37.758 "name": null, 00:10:37.758 "uuid": 
"88cbc6e9-b3e0-40de-bd8b-feaa1dffc14c", 00:10:37.758 "is_configured": false, 00:10:37.758 "data_offset": 0, 00:10:37.758 "data_size": 63488 00:10:37.758 }, 00:10:37.758 { 00:10:37.758 "name": "BaseBdev3", 00:10:37.758 "uuid": "adfff686-c24d-40e9-ac4b-8073c6916bcb", 00:10:37.758 "is_configured": true, 00:10:37.758 "data_offset": 2048, 00:10:37.758 "data_size": 63488 00:10:37.758 } 00:10:37.758 ] 00:10:37.758 }' 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.758 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.325 [2024-11-26 19:00:04.736588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.325 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.325 "name": "Existed_Raid", 00:10:38.325 "uuid": "aa2dd5e9-c019-4d8b-a8b8-14bd81ddc62a", 00:10:38.325 "strip_size_kb": 0, 00:10:38.325 "state": "configuring", 00:10:38.325 
"raid_level": "raid1", 00:10:38.325 "superblock": true, 00:10:38.325 "num_base_bdevs": 3, 00:10:38.325 "num_base_bdevs_discovered": 2, 00:10:38.325 "num_base_bdevs_operational": 3, 00:10:38.325 "base_bdevs_list": [ 00:10:38.325 { 00:10:38.325 "name": null, 00:10:38.325 "uuid": "37ddb919-4c85-4c26-9e9e-93c5613b2504", 00:10:38.325 "is_configured": false, 00:10:38.325 "data_offset": 0, 00:10:38.325 "data_size": 63488 00:10:38.325 }, 00:10:38.325 { 00:10:38.325 "name": "BaseBdev2", 00:10:38.325 "uuid": "88cbc6e9-b3e0-40de-bd8b-feaa1dffc14c", 00:10:38.325 "is_configured": true, 00:10:38.325 "data_offset": 2048, 00:10:38.325 "data_size": 63488 00:10:38.325 }, 00:10:38.325 { 00:10:38.325 "name": "BaseBdev3", 00:10:38.326 "uuid": "adfff686-c24d-40e9-ac4b-8073c6916bcb", 00:10:38.326 "is_configured": true, 00:10:38.326 "data_offset": 2048, 00:10:38.326 "data_size": 63488 00:10:38.326 } 00:10:38.326 ] 00:10:38.326 }' 00:10:38.326 19:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.326 19:00:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:38.893 19:00:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 37ddb919-4c85-4c26-9e9e-93c5613b2504 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.893 [2024-11-26 19:00:05.425413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:38.893 [2024-11-26 19:00:05.425952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:38.893 [2024-11-26 19:00:05.425977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:38.893 NewBaseBdev 00:10:38.893 [2024-11-26 19:00:05.426341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:38.893 [2024-11-26 19:00:05.426565] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:38.893 [2024-11-26 19:00:05.426589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:38.893 [2024-11-26 19:00:05.426760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:38.893 
19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.893 [ 00:10:38.893 { 00:10:38.893 "name": "NewBaseBdev", 00:10:38.893 "aliases": [ 00:10:38.893 "37ddb919-4c85-4c26-9e9e-93c5613b2504" 00:10:38.893 ], 00:10:38.893 "product_name": "Malloc disk", 00:10:38.893 "block_size": 512, 00:10:38.893 "num_blocks": 65536, 00:10:38.893 "uuid": "37ddb919-4c85-4c26-9e9e-93c5613b2504", 00:10:38.893 "assigned_rate_limits": { 00:10:38.893 "rw_ios_per_sec": 0, 00:10:38.893 "rw_mbytes_per_sec": 0, 00:10:38.893 "r_mbytes_per_sec": 0, 00:10:38.893 "w_mbytes_per_sec": 0 00:10:38.893 }, 00:10:38.893 "claimed": true, 00:10:38.893 "claim_type": "exclusive_write", 00:10:38.893 
"zoned": false, 00:10:38.893 "supported_io_types": { 00:10:38.893 "read": true, 00:10:38.893 "write": true, 00:10:38.893 "unmap": true, 00:10:38.893 "flush": true, 00:10:38.893 "reset": true, 00:10:38.893 "nvme_admin": false, 00:10:38.893 "nvme_io": false, 00:10:38.893 "nvme_io_md": false, 00:10:38.893 "write_zeroes": true, 00:10:38.893 "zcopy": true, 00:10:38.893 "get_zone_info": false, 00:10:38.893 "zone_management": false, 00:10:38.893 "zone_append": false, 00:10:38.893 "compare": false, 00:10:38.893 "compare_and_write": false, 00:10:38.893 "abort": true, 00:10:38.893 "seek_hole": false, 00:10:38.893 "seek_data": false, 00:10:38.893 "copy": true, 00:10:38.893 "nvme_iov_md": false 00:10:38.893 }, 00:10:38.893 "memory_domains": [ 00:10:38.893 { 00:10:38.893 "dma_device_id": "system", 00:10:38.893 "dma_device_type": 1 00:10:38.893 }, 00:10:38.893 { 00:10:38.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.893 "dma_device_type": 2 00:10:38.893 } 00:10:38.893 ], 00:10:38.893 "driver_specific": {} 00:10:38.893 } 00:10:38.893 ] 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.893 "name": "Existed_Raid", 00:10:38.893 "uuid": "aa2dd5e9-c019-4d8b-a8b8-14bd81ddc62a", 00:10:38.893 "strip_size_kb": 0, 00:10:38.893 "state": "online", 00:10:38.893 "raid_level": "raid1", 00:10:38.893 "superblock": true, 00:10:38.893 "num_base_bdevs": 3, 00:10:38.893 "num_base_bdevs_discovered": 3, 00:10:38.893 "num_base_bdevs_operational": 3, 00:10:38.893 "base_bdevs_list": [ 00:10:38.893 { 00:10:38.893 "name": "NewBaseBdev", 00:10:38.893 "uuid": "37ddb919-4c85-4c26-9e9e-93c5613b2504", 00:10:38.893 "is_configured": true, 00:10:38.893 "data_offset": 2048, 00:10:38.893 "data_size": 63488 00:10:38.893 }, 00:10:38.893 { 00:10:38.893 "name": "BaseBdev2", 00:10:38.893 "uuid": "88cbc6e9-b3e0-40de-bd8b-feaa1dffc14c", 00:10:38.893 "is_configured": true, 00:10:38.893 "data_offset": 2048, 00:10:38.893 "data_size": 63488 00:10:38.893 }, 00:10:38.893 
{ 00:10:38.893 "name": "BaseBdev3", 00:10:38.893 "uuid": "adfff686-c24d-40e9-ac4b-8073c6916bcb", 00:10:38.893 "is_configured": true, 00:10:38.893 "data_offset": 2048, 00:10:38.893 "data_size": 63488 00:10:38.893 } 00:10:38.893 ] 00:10:38.893 }' 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.893 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.460 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.460 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.460 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.460 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.460 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.460 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.460 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.460 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.460 19:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.460 19:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.460 [2024-11-26 19:00:05.994004] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.460 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.460 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.460 "name": "Existed_Raid", 00:10:39.460 
"aliases": [ 00:10:39.460 "aa2dd5e9-c019-4d8b-a8b8-14bd81ddc62a" 00:10:39.460 ], 00:10:39.460 "product_name": "Raid Volume", 00:10:39.460 "block_size": 512, 00:10:39.460 "num_blocks": 63488, 00:10:39.460 "uuid": "aa2dd5e9-c019-4d8b-a8b8-14bd81ddc62a", 00:10:39.460 "assigned_rate_limits": { 00:10:39.460 "rw_ios_per_sec": 0, 00:10:39.460 "rw_mbytes_per_sec": 0, 00:10:39.460 "r_mbytes_per_sec": 0, 00:10:39.460 "w_mbytes_per_sec": 0 00:10:39.460 }, 00:10:39.460 "claimed": false, 00:10:39.460 "zoned": false, 00:10:39.460 "supported_io_types": { 00:10:39.460 "read": true, 00:10:39.460 "write": true, 00:10:39.460 "unmap": false, 00:10:39.460 "flush": false, 00:10:39.460 "reset": true, 00:10:39.460 "nvme_admin": false, 00:10:39.460 "nvme_io": false, 00:10:39.460 "nvme_io_md": false, 00:10:39.460 "write_zeroes": true, 00:10:39.460 "zcopy": false, 00:10:39.460 "get_zone_info": false, 00:10:39.460 "zone_management": false, 00:10:39.460 "zone_append": false, 00:10:39.460 "compare": false, 00:10:39.460 "compare_and_write": false, 00:10:39.460 "abort": false, 00:10:39.460 "seek_hole": false, 00:10:39.460 "seek_data": false, 00:10:39.460 "copy": false, 00:10:39.460 "nvme_iov_md": false 00:10:39.460 }, 00:10:39.460 "memory_domains": [ 00:10:39.460 { 00:10:39.460 "dma_device_id": "system", 00:10:39.460 "dma_device_type": 1 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.460 "dma_device_type": 2 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "dma_device_id": "system", 00:10:39.460 "dma_device_type": 1 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.460 "dma_device_type": 2 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "dma_device_id": "system", 00:10:39.460 "dma_device_type": 1 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.460 "dma_device_type": 2 00:10:39.460 } 00:10:39.460 ], 00:10:39.460 "driver_specific": { 00:10:39.460 "raid": { 00:10:39.460 
"uuid": "aa2dd5e9-c019-4d8b-a8b8-14bd81ddc62a", 00:10:39.460 "strip_size_kb": 0, 00:10:39.460 "state": "online", 00:10:39.460 "raid_level": "raid1", 00:10:39.460 "superblock": true, 00:10:39.460 "num_base_bdevs": 3, 00:10:39.460 "num_base_bdevs_discovered": 3, 00:10:39.460 "num_base_bdevs_operational": 3, 00:10:39.460 "base_bdevs_list": [ 00:10:39.460 { 00:10:39.460 "name": "NewBaseBdev", 00:10:39.460 "uuid": "37ddb919-4c85-4c26-9e9e-93c5613b2504", 00:10:39.460 "is_configured": true, 00:10:39.460 "data_offset": 2048, 00:10:39.460 "data_size": 63488 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "name": "BaseBdev2", 00:10:39.460 "uuid": "88cbc6e9-b3e0-40de-bd8b-feaa1dffc14c", 00:10:39.460 "is_configured": true, 00:10:39.460 "data_offset": 2048, 00:10:39.460 "data_size": 63488 00:10:39.460 }, 00:10:39.460 { 00:10:39.460 "name": "BaseBdev3", 00:10:39.460 "uuid": "adfff686-c24d-40e9-ac4b-8073c6916bcb", 00:10:39.460 "is_configured": true, 00:10:39.460 "data_offset": 2048, 00:10:39.460 "data_size": 63488 00:10:39.460 } 00:10:39.460 ] 00:10:39.460 } 00:10:39.460 } 00:10:39.460 }' 00:10:39.460 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:39.719 BaseBdev2 00:10:39.719 BaseBdev3' 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:39.719 19:00:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.719 19:00:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 [2024-11-26 19:00:06.325732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.719 [2024-11-26 19:00:06.325784] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.719 [2024-11-26 19:00:06.325899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.719 [2024-11-26 19:00:06.326378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.719 [2024-11-26 19:00:06.326407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68434 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68434 ']' 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68434 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.719 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68434 00:10:39.978 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.978 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.978 killing process with pid 68434 00:10:39.978 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68434' 00:10:39.978 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68434 00:10:39.978 [2024-11-26 19:00:06.365110] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.978 19:00:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68434 00:10:40.236 [2024-11-26 19:00:06.685989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.614 19:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:41.614 00:10:41.614 real 0m12.334s 00:10:41.614 user 0m20.264s 00:10:41.614 sys 0m1.756s 00:10:41.614 19:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.614 19:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.614 ************************************ 00:10:41.614 END TEST raid_state_function_test_sb 00:10:41.614 ************************************ 00:10:41.614 19:00:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:41.614 19:00:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:41.614 19:00:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.614 19:00:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.614 ************************************ 00:10:41.614 START TEST raid_superblock_test 00:10:41.614 ************************************ 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69071 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69071 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 69071 ']' 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.614 19:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.614 [2024-11-26 19:00:08.104557] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:10:41.614 [2024-11-26 19:00:08.104737] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69071 ] 00:10:41.896 [2024-11-26 19:00:08.295259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.896 [2024-11-26 19:00:08.480342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.192 [2024-11-26 19:00:08.731107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.192 [2024-11-26 19:00:08.731171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:42.765 
19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.765 malloc1 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.765 [2024-11-26 19:00:09.182597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:42.765 [2024-11-26 19:00:09.182681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.765 [2024-11-26 19:00:09.182714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:42.765 [2024-11-26 19:00:09.182729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.765 [2024-11-26 19:00:09.185865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.765 [2024-11-26 19:00:09.185910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:42.765 pt1 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.765 malloc2 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.765 [2024-11-26 19:00:09.243725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:42.765 [2024-11-26 19:00:09.243794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.765 [2024-11-26 19:00:09.243834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:42.765 [2024-11-26 19:00:09.243856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.765 [2024-11-26 19:00:09.246900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.765 [2024-11-26 19:00:09.246938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:42.765 
pt2 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.765 malloc3 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.765 [2024-11-26 19:00:09.313388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:42.765 [2024-11-26 19:00:09.313456] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.765 [2024-11-26 19:00:09.313490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:42.765 [2024-11-26 19:00:09.313506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.765 [2024-11-26 19:00:09.316505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.765 [2024-11-26 19:00:09.316549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:42.765 pt3 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.765 [2024-11-26 19:00:09.321475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:42.765 [2024-11-26 19:00:09.324162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:42.765 [2024-11-26 19:00:09.324269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:42.765 [2024-11-26 19:00:09.324502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:42.765 [2024-11-26 19:00:09.324540] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:42.765 [2024-11-26 19:00:09.324844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:42.765 
[2024-11-26 19:00:09.325097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:42.765 [2024-11-26 19:00:09.325126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:42.765 [2024-11-26 19:00:09.325376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.765 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.766 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.026 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.026 "name": "raid_bdev1", 00:10:43.026 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:43.026 "strip_size_kb": 0, 00:10:43.026 "state": "online", 00:10:43.026 "raid_level": "raid1", 00:10:43.026 "superblock": true, 00:10:43.026 "num_base_bdevs": 3, 00:10:43.026 "num_base_bdevs_discovered": 3, 00:10:43.026 "num_base_bdevs_operational": 3, 00:10:43.026 "base_bdevs_list": [ 00:10:43.026 { 00:10:43.026 "name": "pt1", 00:10:43.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.026 "is_configured": true, 00:10:43.026 "data_offset": 2048, 00:10:43.026 "data_size": 63488 00:10:43.026 }, 00:10:43.026 { 00:10:43.026 "name": "pt2", 00:10:43.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.026 "is_configured": true, 00:10:43.026 "data_offset": 2048, 00:10:43.026 "data_size": 63488 00:10:43.026 }, 00:10:43.026 { 00:10:43.026 "name": "pt3", 00:10:43.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.026 "is_configured": true, 00:10:43.026 "data_offset": 2048, 00:10:43.026 "data_size": 63488 00:10:43.026 } 00:10:43.026 ] 00:10:43.026 }' 00:10:43.026 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.026 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.285 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:43.285 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:43.285 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.285 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.285 19:00:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.285 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.285 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.285 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:43.285 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.285 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.285 [2024-11-26 19:00:09.851156] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.285 19:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.285 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.285 "name": "raid_bdev1", 00:10:43.285 "aliases": [ 00:10:43.285 "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6" 00:10:43.285 ], 00:10:43.285 "product_name": "Raid Volume", 00:10:43.285 "block_size": 512, 00:10:43.285 "num_blocks": 63488, 00:10:43.285 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:43.285 "assigned_rate_limits": { 00:10:43.285 "rw_ios_per_sec": 0, 00:10:43.285 "rw_mbytes_per_sec": 0, 00:10:43.285 "r_mbytes_per_sec": 0, 00:10:43.285 "w_mbytes_per_sec": 0 00:10:43.285 }, 00:10:43.285 "claimed": false, 00:10:43.285 "zoned": false, 00:10:43.285 "supported_io_types": { 00:10:43.285 "read": true, 00:10:43.285 "write": true, 00:10:43.285 "unmap": false, 00:10:43.285 "flush": false, 00:10:43.285 "reset": true, 00:10:43.285 "nvme_admin": false, 00:10:43.285 "nvme_io": false, 00:10:43.285 "nvme_io_md": false, 00:10:43.285 "write_zeroes": true, 00:10:43.285 "zcopy": false, 00:10:43.285 "get_zone_info": false, 00:10:43.285 "zone_management": false, 00:10:43.285 "zone_append": false, 00:10:43.285 "compare": false, 00:10:43.285 
"compare_and_write": false, 00:10:43.285 "abort": false, 00:10:43.285 "seek_hole": false, 00:10:43.285 "seek_data": false, 00:10:43.285 "copy": false, 00:10:43.285 "nvme_iov_md": false 00:10:43.285 }, 00:10:43.285 "memory_domains": [ 00:10:43.285 { 00:10:43.285 "dma_device_id": "system", 00:10:43.285 "dma_device_type": 1 00:10:43.285 }, 00:10:43.285 { 00:10:43.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.285 "dma_device_type": 2 00:10:43.285 }, 00:10:43.285 { 00:10:43.285 "dma_device_id": "system", 00:10:43.285 "dma_device_type": 1 00:10:43.285 }, 00:10:43.285 { 00:10:43.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.285 "dma_device_type": 2 00:10:43.285 }, 00:10:43.285 { 00:10:43.285 "dma_device_id": "system", 00:10:43.285 "dma_device_type": 1 00:10:43.285 }, 00:10:43.285 { 00:10:43.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.286 "dma_device_type": 2 00:10:43.286 } 00:10:43.286 ], 00:10:43.286 "driver_specific": { 00:10:43.286 "raid": { 00:10:43.286 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:43.286 "strip_size_kb": 0, 00:10:43.286 "state": "online", 00:10:43.286 "raid_level": "raid1", 00:10:43.286 "superblock": true, 00:10:43.286 "num_base_bdevs": 3, 00:10:43.286 "num_base_bdevs_discovered": 3, 00:10:43.286 "num_base_bdevs_operational": 3, 00:10:43.286 "base_bdevs_list": [ 00:10:43.286 { 00:10:43.286 "name": "pt1", 00:10:43.286 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.286 "is_configured": true, 00:10:43.286 "data_offset": 2048, 00:10:43.286 "data_size": 63488 00:10:43.286 }, 00:10:43.286 { 00:10:43.286 "name": "pt2", 00:10:43.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.286 "is_configured": true, 00:10:43.286 "data_offset": 2048, 00:10:43.286 "data_size": 63488 00:10:43.286 }, 00:10:43.286 { 00:10:43.286 "name": "pt3", 00:10:43.286 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.286 "is_configured": true, 00:10:43.286 "data_offset": 2048, 00:10:43.286 "data_size": 63488 00:10:43.286 } 
00:10:43.286 ] 00:10:43.286 } 00:10:43.286 } 00:10:43.286 }' 00:10:43.286 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.544 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:43.544 pt2 00:10:43.544 pt3' 00:10:43.544 19:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.544 19:00:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.544 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.803 [2024-11-26 19:00:10.179071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6 ']' 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.803 [2024-11-26 19:00:10.226740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:43.803 [2024-11-26 19:00:10.226778] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.803 [2024-11-26 19:00:10.226904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.803 [2024-11-26 19:00:10.227009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.803 [2024-11-26 19:00:10.227025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:43.803 
19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:43.803 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.804 [2024-11-26 19:00:10.358933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:43.804 [2024-11-26 19:00:10.361638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:43.804 [2024-11-26 19:00:10.361728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc3 is claimed 00:10:43.804 [2024-11-26 19:00:10.361815] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:43.804 [2024-11-26 19:00:10.361900] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:43.804 [2024-11-26 19:00:10.361936] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:43.804 [2024-11-26 19:00:10.361964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:43.804 [2024-11-26 19:00:10.361979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:43.804 request: 00:10:43.804 { 00:10:43.804 "name": "raid_bdev1", 00:10:43.804 "raid_level": "raid1", 00:10:43.804 "base_bdevs": [ 00:10:43.804 "malloc1", 00:10:43.804 "malloc2", 00:10:43.804 "malloc3" 00:10:43.804 ], 00:10:43.804 "superblock": false, 00:10:43.804 "method": "bdev_raid_create", 00:10:43.804 "req_id": 1 00:10:43.804 } 00:10:43.804 Got JSON-RPC error response 00:10:43.804 response: 00:10:43.804 { 00:10:43.804 "code": -17, 00:10:43.804 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:43.804 } 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.804 
19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.804 [2024-11-26 19:00:10.414916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:43.804 [2024-11-26 19:00:10.415002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.804 [2024-11-26 19:00:10.415034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:43.804 [2024-11-26 19:00:10.415050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.804 [2024-11-26 19:00:10.418176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.804 [2024-11-26 19:00:10.418219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:43.804 [2024-11-26 19:00:10.418362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:43.804 [2024-11-26 19:00:10.418478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:43.804 pt1 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.804 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.062 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.062 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.062 "name": "raid_bdev1", 00:10:44.062 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:44.062 "strip_size_kb": 0, 00:10:44.062 "state": "configuring", 00:10:44.062 
"raid_level": "raid1", 00:10:44.062 "superblock": true, 00:10:44.062 "num_base_bdevs": 3, 00:10:44.062 "num_base_bdevs_discovered": 1, 00:10:44.062 "num_base_bdevs_operational": 3, 00:10:44.062 "base_bdevs_list": [ 00:10:44.062 { 00:10:44.062 "name": "pt1", 00:10:44.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.062 "is_configured": true, 00:10:44.062 "data_offset": 2048, 00:10:44.062 "data_size": 63488 00:10:44.062 }, 00:10:44.062 { 00:10:44.062 "name": null, 00:10:44.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.062 "is_configured": false, 00:10:44.062 "data_offset": 2048, 00:10:44.062 "data_size": 63488 00:10:44.062 }, 00:10:44.062 { 00:10:44.062 "name": null, 00:10:44.062 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.062 "is_configured": false, 00:10:44.062 "data_offset": 2048, 00:10:44.062 "data_size": 63488 00:10:44.062 } 00:10:44.062 ] 00:10:44.062 }' 00:10:44.062 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.062 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.630 [2024-11-26 19:00:10.963154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.630 [2024-11-26 19:00:10.963241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.630 [2024-11-26 19:00:10.963303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:44.630 [2024-11-26 19:00:10.963322] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.630 [2024-11-26 19:00:10.963992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.630 [2024-11-26 19:00:10.964036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.630 [2024-11-26 19:00:10.964173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:44.630 [2024-11-26 19:00:10.964217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.630 pt2 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.630 [2024-11-26 19:00:10.971132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.630 19:00:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.630 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.630 "name": "raid_bdev1", 00:10:44.630 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:44.630 "strip_size_kb": 0, 00:10:44.630 "state": "configuring", 00:10:44.630 "raid_level": "raid1", 00:10:44.630 "superblock": true, 00:10:44.630 "num_base_bdevs": 3, 00:10:44.630 "num_base_bdevs_discovered": 1, 00:10:44.630 "num_base_bdevs_operational": 3, 00:10:44.630 "base_bdevs_list": [ 00:10:44.630 { 00:10:44.630 "name": "pt1", 00:10:44.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.630 "is_configured": true, 00:10:44.630 "data_offset": 2048, 00:10:44.630 "data_size": 63488 00:10:44.630 }, 00:10:44.630 { 00:10:44.630 "name": null, 00:10:44.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.630 "is_configured": false, 00:10:44.630 "data_offset": 0, 00:10:44.630 "data_size": 63488 00:10:44.630 }, 00:10:44.630 { 00:10:44.630 "name": null, 00:10:44.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.630 "is_configured": false, 00:10:44.630 "data_offset": 2048, 00:10:44.630 
"data_size": 63488 00:10:44.630 } 00:10:44.630 ] 00:10:44.630 }' 00:10:44.630 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.630 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.197 [2024-11-26 19:00:11.535420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:45.197 [2024-11-26 19:00:11.535524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.197 [2024-11-26 19:00:11.535557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:45.197 [2024-11-26 19:00:11.535575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.197 [2024-11-26 19:00:11.536253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.197 [2024-11-26 19:00:11.536318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:45.197 [2024-11-26 19:00:11.536440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:45.197 [2024-11-26 19:00:11.536494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.197 pt2 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.197 [2024-11-26 19:00:11.543392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:45.197 [2024-11-26 19:00:11.543455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.197 [2024-11-26 19:00:11.543479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:45.197 [2024-11-26 19:00:11.543496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.197 [2024-11-26 19:00:11.544134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.197 [2024-11-26 19:00:11.544189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:45.197 [2024-11-26 19:00:11.544319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:45.197 [2024-11-26 19:00:11.544371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:45.197 [2024-11-26 19:00:11.544554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:45.197 [2024-11-26 19:00:11.544587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:45.197 [2024-11-26 19:00:11.544930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:45.197 [2024-11-26 19:00:11.545197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:10:45.197 [2024-11-26 19:00:11.545223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:45.197 [2024-11-26 19:00:11.545437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.197 pt3 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.197 "name": "raid_bdev1", 00:10:45.197 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:45.197 "strip_size_kb": 0, 00:10:45.197 "state": "online", 00:10:45.197 "raid_level": "raid1", 00:10:45.197 "superblock": true, 00:10:45.197 "num_base_bdevs": 3, 00:10:45.197 "num_base_bdevs_discovered": 3, 00:10:45.197 "num_base_bdevs_operational": 3, 00:10:45.197 "base_bdevs_list": [ 00:10:45.197 { 00:10:45.197 "name": "pt1", 00:10:45.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.197 "is_configured": true, 00:10:45.197 "data_offset": 2048, 00:10:45.197 "data_size": 63488 00:10:45.197 }, 00:10:45.197 { 00:10:45.197 "name": "pt2", 00:10:45.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.197 "is_configured": true, 00:10:45.197 "data_offset": 2048, 00:10:45.197 "data_size": 63488 00:10:45.197 }, 00:10:45.197 { 00:10:45.197 "name": "pt3", 00:10:45.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.197 "is_configured": true, 00:10:45.197 "data_offset": 2048, 00:10:45.197 "data_size": 63488 00:10:45.197 } 00:10:45.197 ] 00:10:45.197 }' 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.197 19:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.763 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:45.763 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:45.763 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.763 19:00:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.763 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.763 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.763 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.763 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.763 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.763 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.763 [2024-11-26 19:00:12.087975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.763 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.763 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.763 "name": "raid_bdev1", 00:10:45.763 "aliases": [ 00:10:45.763 "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6" 00:10:45.763 ], 00:10:45.763 "product_name": "Raid Volume", 00:10:45.763 "block_size": 512, 00:10:45.763 "num_blocks": 63488, 00:10:45.763 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:45.763 "assigned_rate_limits": { 00:10:45.763 "rw_ios_per_sec": 0, 00:10:45.763 "rw_mbytes_per_sec": 0, 00:10:45.763 "r_mbytes_per_sec": 0, 00:10:45.763 "w_mbytes_per_sec": 0 00:10:45.764 }, 00:10:45.764 "claimed": false, 00:10:45.764 "zoned": false, 00:10:45.764 "supported_io_types": { 00:10:45.764 "read": true, 00:10:45.764 "write": true, 00:10:45.764 "unmap": false, 00:10:45.764 "flush": false, 00:10:45.764 "reset": true, 00:10:45.764 "nvme_admin": false, 00:10:45.764 "nvme_io": false, 00:10:45.764 "nvme_io_md": false, 00:10:45.764 "write_zeroes": true, 00:10:45.764 "zcopy": false, 00:10:45.764 "get_zone_info": false, 00:10:45.764 
"zone_management": false, 00:10:45.764 "zone_append": false, 00:10:45.764 "compare": false, 00:10:45.764 "compare_and_write": false, 00:10:45.764 "abort": false, 00:10:45.764 "seek_hole": false, 00:10:45.764 "seek_data": false, 00:10:45.764 "copy": false, 00:10:45.764 "nvme_iov_md": false 00:10:45.764 }, 00:10:45.764 "memory_domains": [ 00:10:45.764 { 00:10:45.764 "dma_device_id": "system", 00:10:45.764 "dma_device_type": 1 00:10:45.764 }, 00:10:45.764 { 00:10:45.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.764 "dma_device_type": 2 00:10:45.764 }, 00:10:45.764 { 00:10:45.764 "dma_device_id": "system", 00:10:45.764 "dma_device_type": 1 00:10:45.764 }, 00:10:45.764 { 00:10:45.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.764 "dma_device_type": 2 00:10:45.764 }, 00:10:45.764 { 00:10:45.764 "dma_device_id": "system", 00:10:45.764 "dma_device_type": 1 00:10:45.764 }, 00:10:45.764 { 00:10:45.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.764 "dma_device_type": 2 00:10:45.764 } 00:10:45.764 ], 00:10:45.764 "driver_specific": { 00:10:45.764 "raid": { 00:10:45.764 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:45.764 "strip_size_kb": 0, 00:10:45.764 "state": "online", 00:10:45.764 "raid_level": "raid1", 00:10:45.764 "superblock": true, 00:10:45.764 "num_base_bdevs": 3, 00:10:45.764 "num_base_bdevs_discovered": 3, 00:10:45.764 "num_base_bdevs_operational": 3, 00:10:45.764 "base_bdevs_list": [ 00:10:45.764 { 00:10:45.764 "name": "pt1", 00:10:45.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.764 "is_configured": true, 00:10:45.764 "data_offset": 2048, 00:10:45.764 "data_size": 63488 00:10:45.764 }, 00:10:45.764 { 00:10:45.764 "name": "pt2", 00:10:45.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.764 "is_configured": true, 00:10:45.764 "data_offset": 2048, 00:10:45.764 "data_size": 63488 00:10:45.764 }, 00:10:45.764 { 00:10:45.764 "name": "pt3", 00:10:45.764 "uuid": "00000000-0000-0000-0000-000000000003", 
00:10:45.764 "is_configured": true, 00:10:45.764 "data_offset": 2048, 00:10:45.764 "data_size": 63488 00:10:45.764 } 00:10:45.764 ] 00:10:45.764 } 00:10:45.764 } 00:10:45.764 }' 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:45.764 pt2 00:10:45.764 pt3' 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.764 
19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.764 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.023 [2024-11-26 19:00:12.416081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6 '!=' a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6 ']' 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.023 [2024-11-26 19:00:12.459705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.023 "name": "raid_bdev1", 00:10:46.023 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:46.023 "strip_size_kb": 0, 00:10:46.023 "state": "online", 00:10:46.023 "raid_level": "raid1", 00:10:46.023 "superblock": true, 00:10:46.023 "num_base_bdevs": 3, 00:10:46.023 "num_base_bdevs_discovered": 2, 00:10:46.023 "num_base_bdevs_operational": 2, 00:10:46.023 "base_bdevs_list": [ 00:10:46.023 { 00:10:46.023 "name": null, 00:10:46.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.023 "is_configured": false, 00:10:46.023 "data_offset": 0, 00:10:46.023 "data_size": 63488 00:10:46.023 }, 00:10:46.023 { 00:10:46.023 "name": "pt2", 00:10:46.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.023 "is_configured": true, 00:10:46.023 "data_offset": 2048, 00:10:46.023 "data_size": 63488 00:10:46.023 }, 00:10:46.023 { 00:10:46.023 "name": "pt3", 00:10:46.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.023 "is_configured": true, 00:10:46.023 "data_offset": 2048, 00:10:46.023 "data_size": 63488 00:10:46.023 } 00:10:46.023 ] 00:10:46.023 }' 00:10:46.023 19:00:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.023 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.592 19:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:46.592 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.592 19:00:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.592 [2024-11-26 19:00:12.999895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.592 [2024-11-26 19:00:12.999940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.592 [2024-11-26 19:00:13.000051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.592 [2024-11-26 19:00:13.000152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.592 [2024-11-26 19:00:13.000174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:46.592 
19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.592 [2024-11-26 19:00:13.087904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.592 [2024-11-26 19:00:13.087993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.592 [2024-11-26 19:00:13.088025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:46.592 [2024-11-26 19:00:13.088044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.592 [2024-11-26 19:00:13.091747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.592 [2024-11-26 19:00:13.091805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.592 [2024-11-26 19:00:13.091942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:46.592 [2024-11-26 19:00:13.092041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.592 pt2 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.592 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.593 "name": "raid_bdev1", 00:10:46.593 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:46.593 "strip_size_kb": 0, 00:10:46.593 "state": "configuring", 00:10:46.593 "raid_level": "raid1", 00:10:46.593 "superblock": true, 00:10:46.593 "num_base_bdevs": 3, 00:10:46.593 "num_base_bdevs_discovered": 1, 00:10:46.593 "num_base_bdevs_operational": 2, 00:10:46.593 "base_bdevs_list": [ 00:10:46.593 { 00:10:46.593 "name": null, 00:10:46.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.593 "is_configured": false, 00:10:46.593 "data_offset": 2048, 00:10:46.593 "data_size": 63488 00:10:46.593 }, 00:10:46.593 { 00:10:46.593 "name": "pt2", 00:10:46.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.593 "is_configured": true, 00:10:46.593 "data_offset": 2048, 00:10:46.593 "data_size": 63488 00:10:46.593 }, 00:10:46.593 { 00:10:46.593 "name": null, 00:10:46.593 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.593 "is_configured": false, 00:10:46.593 "data_offset": 2048, 00:10:46.593 "data_size": 63488 00:10:46.593 } 00:10:46.593 ] 00:10:46.593 }' 
00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.593 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.160 [2024-11-26 19:00:13.648263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:47.160 [2024-11-26 19:00:13.648372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.160 [2024-11-26 19:00:13.648414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:47.160 [2024-11-26 19:00:13.648434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.160 [2024-11-26 19:00:13.649158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.160 [2024-11-26 19:00:13.649196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:47.160 [2024-11-26 19:00:13.649352] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:47.160 [2024-11-26 19:00:13.649401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:47.160 [2024-11-26 19:00:13.649597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:47.160 [2024-11-26 19:00:13.649619] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:47.160 [2024-11-26 19:00:13.649960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:47.160 [2024-11-26 19:00:13.650185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:47.160 [2024-11-26 19:00:13.650202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:47.160 [2024-11-26 19:00:13.650589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.160 pt3 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.160 "name": "raid_bdev1", 00:10:47.160 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:47.160 "strip_size_kb": 0, 00:10:47.160 "state": "online", 00:10:47.160 "raid_level": "raid1", 00:10:47.160 "superblock": true, 00:10:47.160 "num_base_bdevs": 3, 00:10:47.160 "num_base_bdevs_discovered": 2, 00:10:47.160 "num_base_bdevs_operational": 2, 00:10:47.160 "base_bdevs_list": [ 00:10:47.160 { 00:10:47.160 "name": null, 00:10:47.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.160 "is_configured": false, 00:10:47.160 "data_offset": 2048, 00:10:47.160 "data_size": 63488 00:10:47.160 }, 00:10:47.160 { 00:10:47.160 "name": "pt2", 00:10:47.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.160 "is_configured": true, 00:10:47.160 "data_offset": 2048, 00:10:47.160 "data_size": 63488 00:10:47.160 }, 00:10:47.160 { 00:10:47.160 "name": "pt3", 00:10:47.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.160 "is_configured": true, 00:10:47.160 "data_offset": 2048, 00:10:47.160 "data_size": 63488 00:10:47.160 } 00:10:47.160 ] 00:10:47.160 }' 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.160 19:00:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.728 
19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.728 [2024-11-26 19:00:14.200477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.728 [2024-11-26 19:00:14.200521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.728 [2024-11-26 19:00:14.200637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.728 [2024-11-26 19:00:14.200734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.728 [2024-11-26 19:00:14.200750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.728 19:00:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.728 [2024-11-26 19:00:14.272557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:47.728 [2024-11-26 19:00:14.272647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.728 [2024-11-26 19:00:14.272681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:47.728 [2024-11-26 19:00:14.272696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.728 [2024-11-26 19:00:14.275964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.728 [2024-11-26 19:00:14.276026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:47.728 [2024-11-26 19:00:14.276175] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:47.728 [2024-11-26 19:00:14.276245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:47.728 [2024-11-26 19:00:14.276451] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:47.728 [2024-11-26 19:00:14.276470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.728 [2024-11-26 19:00:14.276494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:47.728 [2024-11-26 
19:00:14.276568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:47.728 pt1 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.728 19:00:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.728 "name": "raid_bdev1", 00:10:47.728 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:47.728 "strip_size_kb": 0, 00:10:47.728 "state": "configuring", 00:10:47.728 "raid_level": "raid1", 00:10:47.728 "superblock": true, 00:10:47.728 "num_base_bdevs": 3, 00:10:47.728 "num_base_bdevs_discovered": 1, 00:10:47.728 "num_base_bdevs_operational": 2, 00:10:47.728 "base_bdevs_list": [ 00:10:47.728 { 00:10:47.728 "name": null, 00:10:47.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.728 "is_configured": false, 00:10:47.728 "data_offset": 2048, 00:10:47.728 "data_size": 63488 00:10:47.728 }, 00:10:47.728 { 00:10:47.728 "name": "pt2", 00:10:47.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.728 "is_configured": true, 00:10:47.728 "data_offset": 2048, 00:10:47.728 "data_size": 63488 00:10:47.728 }, 00:10:47.728 { 00:10:47.728 "name": null, 00:10:47.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.728 "is_configured": false, 00:10:47.728 "data_offset": 2048, 00:10:47.728 "data_size": 63488 00:10:47.728 } 00:10:47.728 ] 00:10:47.728 }' 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.728 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.314 [2024-11-26 19:00:14.884826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:48.314 [2024-11-26 19:00:14.884929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.314 [2024-11-26 19:00:14.884965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:48.314 [2024-11-26 19:00:14.884979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.314 [2024-11-26 19:00:14.885748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.314 [2024-11-26 19:00:14.885778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:48.314 [2024-11-26 19:00:14.885911] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:48.314 [2024-11-26 19:00:14.885964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:48.314 [2024-11-26 19:00:14.886128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:48.314 [2024-11-26 19:00:14.886144] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:48.314 [2024-11-26 19:00:14.886485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:48.314 [2024-11-26 19:00:14.886687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:48.314 [2024-11-26 19:00:14.886712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:10:48.314 [2024-11-26 19:00:14.886888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.314 pt3 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.314 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.572 19:00:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.572 "name": "raid_bdev1", 00:10:48.572 "uuid": "a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6", 00:10:48.572 "strip_size_kb": 0, 00:10:48.572 "state": "online", 00:10:48.572 "raid_level": "raid1", 00:10:48.572 "superblock": true, 00:10:48.572 "num_base_bdevs": 3, 00:10:48.572 "num_base_bdevs_discovered": 2, 00:10:48.572 "num_base_bdevs_operational": 2, 00:10:48.572 "base_bdevs_list": [ 00:10:48.572 { 00:10:48.572 "name": null, 00:10:48.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.572 "is_configured": false, 00:10:48.572 "data_offset": 2048, 00:10:48.572 "data_size": 63488 00:10:48.572 }, 00:10:48.572 { 00:10:48.572 "name": "pt2", 00:10:48.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.572 "is_configured": true, 00:10:48.572 "data_offset": 2048, 00:10:48.572 "data_size": 63488 00:10:48.572 }, 00:10:48.572 { 00:10:48.572 "name": "pt3", 00:10:48.572 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.572 "is_configured": true, 00:10:48.572 "data_offset": 2048, 00:10:48.572 "data_size": 63488 00:10:48.572 } 00:10:48.572 ] 00:10:48.572 }' 00:10:48.572 19:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.572 19:00:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.831 19:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:48.831 19:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:48.831 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.831 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.831 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:49.090 
19:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.090 [2024-11-26 19:00:15.485434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6 '!=' a5c51df7-e81d-4ac0-bfc6-bf3f90c31cc6 ']' 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69071 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 69071 ']' 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 69071 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69071 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.090 killing process with pid 69071 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69071' 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 69071 00:10:49.090 [2024-11-26 
19:00:15.563342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.090 19:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 69071 00:10:49.090 [2024-11-26 19:00:15.563480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.090 [2024-11-26 19:00:15.563572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.090 [2024-11-26 19:00:15.563603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:49.349 [2024-11-26 19:00:15.877812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.723 19:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:50.723 00:10:50.723 real 0m9.091s 00:10:50.723 user 0m14.723s 00:10:50.723 sys 0m1.361s 00:10:50.723 19:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.723 19:00:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.723 ************************************ 00:10:50.723 END TEST raid_superblock_test 00:10:50.723 ************************************ 00:10:50.723 19:00:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:50.723 19:00:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:50.723 19:00:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.723 19:00:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.723 ************************************ 00:10:50.723 START TEST raid_read_error_test 00:10:50.723 ************************************ 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:50.723 19:00:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eP2PrIZBus 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69533 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69533 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69533 ']' 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.723 19:00:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.723 [2024-11-26 19:00:17.254441] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:10:50.723 [2024-11-26 19:00:17.254651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69533 ] 00:10:50.981 [2024-11-26 19:00:17.437414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.981 [2024-11-26 19:00:17.591549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.240 [2024-11-26 19:00:17.832883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.240 [2024-11-26 19:00:17.833018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.806 BaseBdev1_malloc 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.806 true 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.806 [2024-11-26 19:00:18.345782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:51.806 [2024-11-26 19:00:18.345867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.806 [2024-11-26 19:00:18.345932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:51.806 [2024-11-26 19:00:18.345974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.806 [2024-11-26 19:00:18.349886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.806 [2024-11-26 19:00:18.349965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:51.806 BaseBdev1 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.806 BaseBdev2_malloc 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.806 true 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.806 [2024-11-26 19:00:18.419990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:51.806 [2024-11-26 19:00:18.420084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.806 [2024-11-26 19:00:18.420114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:51.806 [2024-11-26 19:00:18.420132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.806 [2024-11-26 19:00:18.423384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.806 [2024-11-26 19:00:18.423427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:51.806 BaseBdev2 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.806 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:51.807 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.807 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.065 BaseBdev3_malloc 00:10:52.065 19:00:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.065 true 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.065 [2024-11-26 19:00:18.506672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:52.065 [2024-11-26 19:00:18.506794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.065 [2024-11-26 19:00:18.506827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:52.065 [2024-11-26 19:00:18.506846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.065 [2024-11-26 19:00:18.510112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.065 [2024-11-26 19:00:18.510170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:52.065 BaseBdev3 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.065 [2024-11-26 19:00:18.519031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.065 [2024-11-26 19:00:18.521834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.065 [2024-11-26 19:00:18.521974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.065 [2024-11-26 19:00:18.522393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:52.065 [2024-11-26 19:00:18.522415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:52.065 [2024-11-26 19:00:18.522818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:52.065 [2024-11-26 19:00:18.523129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:52.065 [2024-11-26 19:00:18.523150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:52.065 [2024-11-26 19:00:18.523457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.065 19:00:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.065 "name": "raid_bdev1", 00:10:52.065 "uuid": "956b3a44-2077-46c7-9b57-fb5a04d39ba8", 00:10:52.065 "strip_size_kb": 0, 00:10:52.065 "state": "online", 00:10:52.065 "raid_level": "raid1", 00:10:52.065 "superblock": true, 00:10:52.065 "num_base_bdevs": 3, 00:10:52.065 "num_base_bdevs_discovered": 3, 00:10:52.065 "num_base_bdevs_operational": 3, 00:10:52.065 "base_bdevs_list": [ 00:10:52.065 { 00:10:52.065 "name": "BaseBdev1", 00:10:52.065 "uuid": "6b0d4461-951a-5d75-b32b-c423a4e616e1", 00:10:52.065 "is_configured": true, 00:10:52.065 "data_offset": 2048, 00:10:52.065 "data_size": 63488 00:10:52.065 }, 00:10:52.065 { 00:10:52.065 "name": "BaseBdev2", 00:10:52.065 "uuid": "86b1cef5-aeb1-51db-827b-ee1c4cbd9512", 00:10:52.065 "is_configured": true, 00:10:52.065 "data_offset": 2048, 00:10:52.065 "data_size": 63488 
00:10:52.065 }, 00:10:52.065 { 00:10:52.065 "name": "BaseBdev3", 00:10:52.065 "uuid": "3f727bca-569b-5c49-8669-21ea71674633", 00:10:52.065 "is_configured": true, 00:10:52.065 "data_offset": 2048, 00:10:52.065 "data_size": 63488 00:10:52.065 } 00:10:52.065 ] 00:10:52.065 }' 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.065 19:00:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.630 19:00:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:52.630 19:00:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:52.630 [2024-11-26 19:00:19.217227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.564 
19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.564 "name": "raid_bdev1", 00:10:53.564 "uuid": "956b3a44-2077-46c7-9b57-fb5a04d39ba8", 00:10:53.564 "strip_size_kb": 0, 00:10:53.564 "state": "online", 00:10:53.564 "raid_level": "raid1", 00:10:53.564 "superblock": true, 00:10:53.564 "num_base_bdevs": 3, 00:10:53.564 "num_base_bdevs_discovered": 3, 00:10:53.564 "num_base_bdevs_operational": 3, 00:10:53.564 "base_bdevs_list": [ 00:10:53.564 { 00:10:53.564 "name": "BaseBdev1", 00:10:53.564 "uuid": "6b0d4461-951a-5d75-b32b-c423a4e616e1", 
00:10:53.564 "is_configured": true, 00:10:53.564 "data_offset": 2048, 00:10:53.564 "data_size": 63488 00:10:53.564 }, 00:10:53.564 { 00:10:53.564 "name": "BaseBdev2", 00:10:53.564 "uuid": "86b1cef5-aeb1-51db-827b-ee1c4cbd9512", 00:10:53.564 "is_configured": true, 00:10:53.564 "data_offset": 2048, 00:10:53.564 "data_size": 63488 00:10:53.564 }, 00:10:53.564 { 00:10:53.564 "name": "BaseBdev3", 00:10:53.564 "uuid": "3f727bca-569b-5c49-8669-21ea71674633", 00:10:53.564 "is_configured": true, 00:10:53.564 "data_offset": 2048, 00:10:53.564 "data_size": 63488 00:10:53.564 } 00:10:53.564 ] 00:10:53.564 }' 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.564 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.169 [2024-11-26 19:00:20.663753] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.169 [2024-11-26 19:00:20.663800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.169 [2024-11-26 19:00:20.667259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.169 [2024-11-26 19:00:20.667347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.169 [2024-11-26 19:00:20.667514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.169 [2024-11-26 19:00:20.667534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:54.169 { 00:10:54.169 "results": [ 00:10:54.169 { 00:10:54.169 "job": "raid_bdev1", 
00:10:54.169 "core_mask": "0x1", 00:10:54.169 "workload": "randrw", 00:10:54.169 "percentage": 50, 00:10:54.169 "status": "finished", 00:10:54.169 "queue_depth": 1, 00:10:54.169 "io_size": 131072, 00:10:54.169 "runtime": 1.443982, 00:10:54.169 "iops": 7623.36372614063, 00:10:54.169 "mibps": 952.9204657675788, 00:10:54.169 "io_failed": 0, 00:10:54.169 "io_timeout": 0, 00:10:54.169 "avg_latency_us": 126.7359619450317, 00:10:54.169 "min_latency_us": 40.261818181818185, 00:10:54.169 "max_latency_us": 1995.8690909090908 00:10:54.169 } 00:10:54.169 ], 00:10:54.169 "core_count": 1 00:10:54.169 } 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69533 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69533 ']' 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69533 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69533 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.169 killing process with pid 69533 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69533' 00:10:54.169 19:00:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69533 00:10:54.169 [2024-11-26 19:00:20.706643] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.169 19:00:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69533 00:10:54.428 [2024-11-26 19:00:20.957704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.803 19:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eP2PrIZBus 00:10:55.803 19:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:55.803 19:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:55.803 19:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:55.803 19:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:55.803 19:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:55.803 19:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:55.803 19:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:55.803 00:10:55.803 real 0m5.065s 00:10:55.803 user 0m6.247s 00:10:55.803 sys 0m0.688s 00:10:55.803 19:00:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.803 ************************************ 00:10:55.803 END TEST raid_read_error_test 00:10:55.803 ************************************ 00:10:55.803 19:00:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.803 19:00:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:55.803 19:00:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:55.803 19:00:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.803 19:00:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.803 ************************************ 00:10:55.803 START TEST raid_write_error_test 00:10:55.803 ************************************ 00:10:55.803 19:00:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:55.803 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fIumhCK6q5 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69681 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69681 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69681 ']' 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.804 19:00:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.804 [2024-11-26 19:00:22.375654] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:10:55.804 [2024-11-26 19:00:22.375858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69681 ] 00:10:56.063 [2024-11-26 19:00:22.565395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.322 [2024-11-26 19:00:22.717442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.580 [2024-11-26 19:00:22.947443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.580 [2024-11-26 19:00:22.947567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.839 BaseBdev1_malloc 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.839 true 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.839 [2024-11-26 19:00:23.419079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:56.839 [2024-11-26 19:00:23.419162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.839 [2024-11-26 19:00:23.419193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:56.839 [2024-11-26 19:00:23.419210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.839 [2024-11-26 19:00:23.422285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.839 [2024-11-26 19:00:23.422354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:56.839 BaseBdev1 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.839 19:00:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.098 BaseBdev2_malloc 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 true 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 [2024-11-26 19:00:23.482008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:57.098 [2024-11-26 19:00:23.482079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.098 [2024-11-26 19:00:23.482108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:57.098 [2024-11-26 19:00:23.482125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.098 [2024-11-26 19:00:23.485326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.098 [2024-11-26 19:00:23.485370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:57.098 BaseBdev2 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.098 19:00:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 BaseBdev3_malloc 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 true 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 [2024-11-26 19:00:23.552217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:57.098 [2024-11-26 19:00:23.552299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.098 [2024-11-26 19:00:23.552331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:57.098 [2024-11-26 19:00:23.552365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.098 [2024-11-26 19:00:23.555499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.098 [2024-11-26 19:00:23.555546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:57.098 BaseBdev3 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 [2024-11-26 19:00:23.560493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.098 [2024-11-26 19:00:23.563236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.098 [2024-11-26 19:00:23.563367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.098 [2024-11-26 19:00:23.563680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:57.098 [2024-11-26 19:00:23.563699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:57.098 [2024-11-26 19:00:23.564063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:57.098 [2024-11-26 19:00:23.564352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:57.098 [2024-11-26 19:00:23.564373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:57.098 [2024-11-26 19:00:23.564649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.098 "name": "raid_bdev1", 00:10:57.098 "uuid": "6b1b78a5-20cd-4747-aa15-255a68256f6a", 00:10:57.098 "strip_size_kb": 0, 00:10:57.098 "state": "online", 00:10:57.098 "raid_level": "raid1", 00:10:57.098 "superblock": true, 00:10:57.098 "num_base_bdevs": 3, 00:10:57.098 "num_base_bdevs_discovered": 3, 00:10:57.099 "num_base_bdevs_operational": 3, 00:10:57.099 "base_bdevs_list": [ 00:10:57.099 { 00:10:57.099 "name": "BaseBdev1", 00:10:57.099 
"uuid": "45f0665c-830c-54c1-811a-d74cef4a6759", 00:10:57.099 "is_configured": true, 00:10:57.099 "data_offset": 2048, 00:10:57.099 "data_size": 63488 00:10:57.099 }, 00:10:57.099 { 00:10:57.099 "name": "BaseBdev2", 00:10:57.099 "uuid": "7f7ad8b7-89a5-5638-b0a8-4378edcb8438", 00:10:57.099 "is_configured": true, 00:10:57.099 "data_offset": 2048, 00:10:57.099 "data_size": 63488 00:10:57.099 }, 00:10:57.099 { 00:10:57.099 "name": "BaseBdev3", 00:10:57.099 "uuid": "e51196c9-ae34-53ff-ac52-e82077a0c1e7", 00:10:57.099 "is_configured": true, 00:10:57.099 "data_offset": 2048, 00:10:57.099 "data_size": 63488 00:10:57.099 } 00:10:57.099 ] 00:10:57.099 }' 00:10:57.099 19:00:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.099 19:00:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.665 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:57.665 19:00:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:57.665 [2024-11-26 19:00:24.254353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.598 [2024-11-26 19:00:25.098185] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:58.598 [2024-11-26 19:00:25.098245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.598 [2024-11-26 19:00:25.098532] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:58.598 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.599 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.599 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.599 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.599 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.599 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.599 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:58.599 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.599 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.599 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.599 "name": "raid_bdev1", 00:10:58.599 "uuid": "6b1b78a5-20cd-4747-aa15-255a68256f6a", 00:10:58.599 "strip_size_kb": 0, 00:10:58.599 "state": "online", 00:10:58.599 "raid_level": "raid1", 00:10:58.599 "superblock": true, 00:10:58.599 "num_base_bdevs": 3, 00:10:58.599 "num_base_bdevs_discovered": 2, 00:10:58.599 "num_base_bdevs_operational": 2, 00:10:58.599 "base_bdevs_list": [ 00:10:58.599 { 00:10:58.599 "name": null, 00:10:58.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.599 "is_configured": false, 00:10:58.599 "data_offset": 0, 00:10:58.599 "data_size": 63488 00:10:58.599 }, 00:10:58.599 { 00:10:58.599 "name": "BaseBdev2", 00:10:58.599 "uuid": "7f7ad8b7-89a5-5638-b0a8-4378edcb8438", 00:10:58.599 "is_configured": true, 00:10:58.599 "data_offset": 2048, 00:10:58.599 "data_size": 63488 00:10:58.599 }, 00:10:58.599 { 00:10:58.599 "name": "BaseBdev3", 00:10:58.599 "uuid": "e51196c9-ae34-53ff-ac52-e82077a0c1e7", 00:10:58.599 "is_configured": true, 00:10:58.599 "data_offset": 2048, 00:10:58.599 "data_size": 63488 00:10:58.599 } 00:10:58.599 ] 00:10:58.599 }' 00:10:58.599 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.599 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.166 [2024-11-26 19:00:25.645197] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.166 [2024-11-26 19:00:25.645242] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.166 [2024-11-26 19:00:25.648785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.166 [2024-11-26 19:00:25.648863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.166 [2024-11-26 19:00:25.648978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.166 [2024-11-26 19:00:25.649003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.166 { 00:10:59.166 "results": [ 00:10:59.166 { 00:10:59.166 "job": "raid_bdev1", 00:10:59.166 "core_mask": "0x1", 00:10:59.166 "workload": "randrw", 00:10:59.166 "percentage": 50, 00:10:59.166 "status": "finished", 00:10:59.166 "queue_depth": 1, 00:10:59.166 "io_size": 131072, 00:10:59.166 "runtime": 1.388331, 00:10:59.166 "iops": 8454.035817107015, 00:10:59.166 "mibps": 1056.7544771383768, 00:10:59.166 "io_failed": 0, 00:10:59.166 "io_timeout": 0, 00:10:59.166 "avg_latency_us": 113.90131162524109, 00:10:59.166 "min_latency_us": 42.82181818181818, 00:10:59.166 "max_latency_us": 1839.4763636363637 00:10:59.166 } 00:10:59.166 ], 00:10:59.166 "core_count": 1 00:10:59.166 } 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69681 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69681 ']' 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69681 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:59.166 19:00:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69681 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.166 killing process with pid 69681 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69681' 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69681 00:10:59.166 [2024-11-26 19:00:25.677083] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.166 19:00:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69681 00:10:59.425 [2024-11-26 19:00:25.904587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.799 19:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fIumhCK6q5 00:11:00.799 19:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:00.799 19:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:00.799 19:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:00.799 19:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:00.799 19:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:00.799 19:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:00.799 19:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:00.799 00:11:00.799 real 0m4.909s 00:11:00.799 user 0m6.063s 00:11:00.799 sys 0m0.637s 00:11:00.799 19:00:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.799 19:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.799 ************************************ 00:11:00.799 END TEST raid_write_error_test 00:11:00.799 ************************************ 00:11:00.799 19:00:27 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:00.799 19:00:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:00.799 19:00:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:00.799 19:00:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:00.799 19:00:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.799 19:00:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.799 ************************************ 00:11:00.799 START TEST raid_state_function_test 00:11:00.799 ************************************ 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:00.799 
19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:00.799 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:00.799 Process raid pid: 69830 00:11:00.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.800 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69830 00:11:00.800 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69830' 00:11:00.800 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69830 00:11:00.800 19:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69830 ']' 00:11:00.800 19:00:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:00.800 19:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.800 19:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.800 19:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.800 19:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.800 19:00:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.800 [2024-11-26 19:00:27.333207] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:11:00.800 [2024-11-26 19:00:27.333646] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.059 [2024-11-26 19:00:27.527887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.317 [2024-11-26 19:00:27.686542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.318 [2024-11-26 19:00:27.933669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.318 [2024-11-26 19:00:27.934071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.883 [2024-11-26 19:00:28.343352] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.883 [2024-11-26 19:00:28.343422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.883 [2024-11-26 19:00:28.343441] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.883 [2024-11-26 19:00:28.343459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.883 [2024-11-26 19:00:28.343470] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:01.883 [2024-11-26 19:00:28.343484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.883 [2024-11-26 19:00:28.343494] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:01.883 [2024-11-26 19:00:28.343509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.883 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.883 "name": "Existed_Raid", 00:11:01.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.883 "strip_size_kb": 64, 00:11:01.883 "state": "configuring", 00:11:01.883 "raid_level": "raid0", 00:11:01.883 "superblock": false, 00:11:01.883 "num_base_bdevs": 4, 00:11:01.883 "num_base_bdevs_discovered": 0, 00:11:01.883 "num_base_bdevs_operational": 4, 00:11:01.883 "base_bdevs_list": [ 00:11:01.884 { 00:11:01.884 "name": "BaseBdev1", 00:11:01.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.884 "is_configured": false, 00:11:01.884 "data_offset": 0, 00:11:01.884 "data_size": 0 00:11:01.884 }, 00:11:01.884 { 00:11:01.884 "name": "BaseBdev2", 00:11:01.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.884 "is_configured": false, 00:11:01.884 "data_offset": 0, 00:11:01.884 "data_size": 0 00:11:01.884 }, 00:11:01.884 { 00:11:01.884 "name": "BaseBdev3", 00:11:01.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.884 "is_configured": false, 00:11:01.884 "data_offset": 0, 00:11:01.884 "data_size": 0 00:11:01.884 }, 00:11:01.884 { 00:11:01.884 "name": "BaseBdev4", 00:11:01.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.884 "is_configured": false, 00:11:01.884 "data_offset": 0, 00:11:01.884 "data_size": 0 00:11:01.884 } 00:11:01.884 ] 00:11:01.884 }' 00:11:01.884 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.884 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.451 [2024-11-26 19:00:28.875429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.451 [2024-11-26 19:00:28.875485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.451 [2024-11-26 19:00:28.883463] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.451 [2024-11-26 19:00:28.883525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.451 [2024-11-26 19:00:28.883543] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.451 [2024-11-26 19:00:28.883560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.451 [2024-11-26 19:00:28.883578] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.451 [2024-11-26 19:00:28.883593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.451 [2024-11-26 19:00:28.883602] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:02.451 [2024-11-26 19:00:28.883617] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.451 [2024-11-26 19:00:28.935114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.451 BaseBdev1 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.451 [ 00:11:02.451 { 00:11:02.451 "name": "BaseBdev1", 00:11:02.451 "aliases": [ 00:11:02.451 "3dcf2302-c06c-49df-a4fd-0f3479a86cc3" 00:11:02.451 ], 00:11:02.451 "product_name": "Malloc disk", 00:11:02.451 "block_size": 512, 00:11:02.451 "num_blocks": 65536, 00:11:02.451 "uuid": "3dcf2302-c06c-49df-a4fd-0f3479a86cc3", 00:11:02.451 "assigned_rate_limits": { 00:11:02.451 "rw_ios_per_sec": 0, 00:11:02.451 "rw_mbytes_per_sec": 0, 00:11:02.451 "r_mbytes_per_sec": 0, 00:11:02.451 "w_mbytes_per_sec": 0 00:11:02.451 }, 00:11:02.451 "claimed": true, 00:11:02.451 "claim_type": "exclusive_write", 00:11:02.451 "zoned": false, 00:11:02.451 "supported_io_types": { 00:11:02.451 "read": true, 00:11:02.451 "write": true, 00:11:02.451 "unmap": true, 00:11:02.451 "flush": true, 00:11:02.451 "reset": true, 00:11:02.451 "nvme_admin": false, 00:11:02.451 "nvme_io": false, 00:11:02.451 "nvme_io_md": false, 00:11:02.451 "write_zeroes": true, 00:11:02.451 "zcopy": true, 00:11:02.451 "get_zone_info": false, 00:11:02.451 "zone_management": false, 00:11:02.451 "zone_append": false, 00:11:02.451 "compare": false, 00:11:02.451 "compare_and_write": false, 00:11:02.451 "abort": true, 00:11:02.451 "seek_hole": false, 00:11:02.451 "seek_data": false, 00:11:02.451 "copy": true, 00:11:02.451 "nvme_iov_md": false 00:11:02.451 }, 00:11:02.451 "memory_domains": [ 00:11:02.451 { 00:11:02.451 "dma_device_id": "system", 00:11:02.451 "dma_device_type": 1 00:11:02.451 }, 00:11:02.451 { 00:11:02.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.451 "dma_device_type": 2 00:11:02.451 } 00:11:02.451 ], 00:11:02.451 "driver_specific": {} 00:11:02.451 } 00:11:02.451 ] 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.451 19:00:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.451 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.451 "name": "Existed_Raid", 
00:11:02.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.451 "strip_size_kb": 64, 00:11:02.451 "state": "configuring", 00:11:02.451 "raid_level": "raid0", 00:11:02.451 "superblock": false, 00:11:02.451 "num_base_bdevs": 4, 00:11:02.451 "num_base_bdevs_discovered": 1, 00:11:02.451 "num_base_bdevs_operational": 4, 00:11:02.451 "base_bdevs_list": [ 00:11:02.451 { 00:11:02.451 "name": "BaseBdev1", 00:11:02.451 "uuid": "3dcf2302-c06c-49df-a4fd-0f3479a86cc3", 00:11:02.451 "is_configured": true, 00:11:02.451 "data_offset": 0, 00:11:02.451 "data_size": 65536 00:11:02.451 }, 00:11:02.451 { 00:11:02.451 "name": "BaseBdev2", 00:11:02.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.451 "is_configured": false, 00:11:02.451 "data_offset": 0, 00:11:02.451 "data_size": 0 00:11:02.451 }, 00:11:02.451 { 00:11:02.452 "name": "BaseBdev3", 00:11:02.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.452 "is_configured": false, 00:11:02.452 "data_offset": 0, 00:11:02.452 "data_size": 0 00:11:02.452 }, 00:11:02.452 { 00:11:02.452 "name": "BaseBdev4", 00:11:02.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.452 "is_configured": false, 00:11:02.452 "data_offset": 0, 00:11:02.452 "data_size": 0 00:11:02.452 } 00:11:02.452 ] 00:11:02.452 }' 00:11:02.452 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.452 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.017 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.017 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.017 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.017 [2024-11-26 19:00:29.567357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.017 [2024-11-26 19:00:29.567565] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:03.017 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.017 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:03.017 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.017 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.017 [2024-11-26 19:00:29.575431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.017 [2024-11-26 19:00:29.578042] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.017 [2024-11-26 19:00:29.578097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.017 [2024-11-26 19:00:29.578131] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.018 [2024-11-26 19:00:29.578148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.018 [2024-11-26 19:00:29.578158] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.018 [2024-11-26 19:00:29.578171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.018 "name": "Existed_Raid", 00:11:03.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.018 "strip_size_kb": 64, 00:11:03.018 "state": "configuring", 00:11:03.018 "raid_level": "raid0", 00:11:03.018 "superblock": false, 00:11:03.018 "num_base_bdevs": 4, 00:11:03.018 
"num_base_bdevs_discovered": 1, 00:11:03.018 "num_base_bdevs_operational": 4, 00:11:03.018 "base_bdevs_list": [ 00:11:03.018 { 00:11:03.018 "name": "BaseBdev1", 00:11:03.018 "uuid": "3dcf2302-c06c-49df-a4fd-0f3479a86cc3", 00:11:03.018 "is_configured": true, 00:11:03.018 "data_offset": 0, 00:11:03.018 "data_size": 65536 00:11:03.018 }, 00:11:03.018 { 00:11:03.018 "name": "BaseBdev2", 00:11:03.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.018 "is_configured": false, 00:11:03.018 "data_offset": 0, 00:11:03.018 "data_size": 0 00:11:03.018 }, 00:11:03.018 { 00:11:03.018 "name": "BaseBdev3", 00:11:03.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.018 "is_configured": false, 00:11:03.018 "data_offset": 0, 00:11:03.018 "data_size": 0 00:11:03.018 }, 00:11:03.018 { 00:11:03.018 "name": "BaseBdev4", 00:11:03.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.018 "is_configured": false, 00:11:03.018 "data_offset": 0, 00:11:03.018 "data_size": 0 00:11:03.018 } 00:11:03.018 ] 00:11:03.018 }' 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.018 19:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 [2024-11-26 19:00:30.114843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.585 BaseBdev2 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:03.585 19:00:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 [ 00:11:03.585 { 00:11:03.585 "name": "BaseBdev2", 00:11:03.585 "aliases": [ 00:11:03.585 "e2fcd8b4-af72-4222-ac49-7e40bcaf2ddf" 00:11:03.585 ], 00:11:03.585 "product_name": "Malloc disk", 00:11:03.585 "block_size": 512, 00:11:03.585 "num_blocks": 65536, 00:11:03.585 "uuid": "e2fcd8b4-af72-4222-ac49-7e40bcaf2ddf", 00:11:03.585 "assigned_rate_limits": { 00:11:03.585 "rw_ios_per_sec": 0, 00:11:03.585 "rw_mbytes_per_sec": 0, 00:11:03.585 "r_mbytes_per_sec": 0, 00:11:03.585 "w_mbytes_per_sec": 0 00:11:03.585 }, 00:11:03.585 "claimed": true, 00:11:03.585 "claim_type": "exclusive_write", 00:11:03.585 "zoned": false, 00:11:03.585 "supported_io_types": { 
00:11:03.585 "read": true, 00:11:03.585 "write": true, 00:11:03.585 "unmap": true, 00:11:03.585 "flush": true, 00:11:03.585 "reset": true, 00:11:03.585 "nvme_admin": false, 00:11:03.585 "nvme_io": false, 00:11:03.585 "nvme_io_md": false, 00:11:03.585 "write_zeroes": true, 00:11:03.585 "zcopy": true, 00:11:03.585 "get_zone_info": false, 00:11:03.585 "zone_management": false, 00:11:03.585 "zone_append": false, 00:11:03.585 "compare": false, 00:11:03.585 "compare_and_write": false, 00:11:03.585 "abort": true, 00:11:03.585 "seek_hole": false, 00:11:03.585 "seek_data": false, 00:11:03.585 "copy": true, 00:11:03.585 "nvme_iov_md": false 00:11:03.585 }, 00:11:03.585 "memory_domains": [ 00:11:03.585 { 00:11:03.585 "dma_device_id": "system", 00:11:03.585 "dma_device_type": 1 00:11:03.585 }, 00:11:03.585 { 00:11:03.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.585 "dma_device_type": 2 00:11:03.585 } 00:11:03.585 ], 00:11:03.585 "driver_specific": {} 00:11:03.585 } 00:11:03.585 ] 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.585 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.586 "name": "Existed_Raid", 00:11:03.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.586 "strip_size_kb": 64, 00:11:03.586 "state": "configuring", 00:11:03.586 "raid_level": "raid0", 00:11:03.586 "superblock": false, 00:11:03.586 "num_base_bdevs": 4, 00:11:03.586 "num_base_bdevs_discovered": 2, 00:11:03.586 "num_base_bdevs_operational": 4, 00:11:03.586 "base_bdevs_list": [ 00:11:03.586 { 00:11:03.586 "name": "BaseBdev1", 00:11:03.586 "uuid": "3dcf2302-c06c-49df-a4fd-0f3479a86cc3", 00:11:03.586 "is_configured": true, 00:11:03.586 "data_offset": 0, 00:11:03.586 "data_size": 65536 00:11:03.586 }, 00:11:03.586 { 00:11:03.586 "name": "BaseBdev2", 00:11:03.586 "uuid": "e2fcd8b4-af72-4222-ac49-7e40bcaf2ddf", 00:11:03.586 
"is_configured": true, 00:11:03.586 "data_offset": 0, 00:11:03.586 "data_size": 65536 00:11:03.586 }, 00:11:03.586 { 00:11:03.586 "name": "BaseBdev3", 00:11:03.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.586 "is_configured": false, 00:11:03.586 "data_offset": 0, 00:11:03.586 "data_size": 0 00:11:03.586 }, 00:11:03.586 { 00:11:03.586 "name": "BaseBdev4", 00:11:03.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.586 "is_configured": false, 00:11:03.586 "data_offset": 0, 00:11:03.586 "data_size": 0 00:11:03.586 } 00:11:03.586 ] 00:11:03.586 }' 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.586 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.153 [2024-11-26 19:00:30.715050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.153 BaseBdev3 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.153 [ 00:11:04.153 { 00:11:04.153 "name": "BaseBdev3", 00:11:04.153 "aliases": [ 00:11:04.153 "4e601faf-2d36-4fbb-9fd0-60a8397a315d" 00:11:04.153 ], 00:11:04.153 "product_name": "Malloc disk", 00:11:04.153 "block_size": 512, 00:11:04.153 "num_blocks": 65536, 00:11:04.153 "uuid": "4e601faf-2d36-4fbb-9fd0-60a8397a315d", 00:11:04.153 "assigned_rate_limits": { 00:11:04.153 "rw_ios_per_sec": 0, 00:11:04.153 "rw_mbytes_per_sec": 0, 00:11:04.153 "r_mbytes_per_sec": 0, 00:11:04.153 "w_mbytes_per_sec": 0 00:11:04.153 }, 00:11:04.153 "claimed": true, 00:11:04.153 "claim_type": "exclusive_write", 00:11:04.153 "zoned": false, 00:11:04.153 "supported_io_types": { 00:11:04.153 "read": true, 00:11:04.153 "write": true, 00:11:04.153 "unmap": true, 00:11:04.153 "flush": true, 00:11:04.153 "reset": true, 00:11:04.153 "nvme_admin": false, 00:11:04.153 "nvme_io": false, 00:11:04.153 "nvme_io_md": false, 00:11:04.153 "write_zeroes": true, 00:11:04.153 "zcopy": true, 00:11:04.153 "get_zone_info": false, 00:11:04.153 "zone_management": false, 00:11:04.153 "zone_append": false, 00:11:04.153 "compare": false, 00:11:04.153 "compare_and_write": false, 
00:11:04.153 "abort": true, 00:11:04.153 "seek_hole": false, 00:11:04.153 "seek_data": false, 00:11:04.153 "copy": true, 00:11:04.153 "nvme_iov_md": false 00:11:04.153 }, 00:11:04.153 "memory_domains": [ 00:11:04.153 { 00:11:04.153 "dma_device_id": "system", 00:11:04.153 "dma_device_type": 1 00:11:04.153 }, 00:11:04.153 { 00:11:04.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.153 "dma_device_type": 2 00:11:04.153 } 00:11:04.153 ], 00:11:04.153 "driver_specific": {} 00:11:04.153 } 00:11:04.153 ] 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.153 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.411 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.411 "name": "Existed_Raid", 00:11:04.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.411 "strip_size_kb": 64, 00:11:04.411 "state": "configuring", 00:11:04.411 "raid_level": "raid0", 00:11:04.411 "superblock": false, 00:11:04.411 "num_base_bdevs": 4, 00:11:04.411 "num_base_bdevs_discovered": 3, 00:11:04.411 "num_base_bdevs_operational": 4, 00:11:04.411 "base_bdevs_list": [ 00:11:04.411 { 00:11:04.411 "name": "BaseBdev1", 00:11:04.412 "uuid": "3dcf2302-c06c-49df-a4fd-0f3479a86cc3", 00:11:04.412 "is_configured": true, 00:11:04.412 "data_offset": 0, 00:11:04.412 "data_size": 65536 00:11:04.412 }, 00:11:04.412 { 00:11:04.412 "name": "BaseBdev2", 00:11:04.412 "uuid": "e2fcd8b4-af72-4222-ac49-7e40bcaf2ddf", 00:11:04.412 "is_configured": true, 00:11:04.412 "data_offset": 0, 00:11:04.412 "data_size": 65536 00:11:04.412 }, 00:11:04.412 { 00:11:04.412 "name": "BaseBdev3", 00:11:04.412 "uuid": "4e601faf-2d36-4fbb-9fd0-60a8397a315d", 00:11:04.412 "is_configured": true, 00:11:04.412 "data_offset": 0, 00:11:04.412 "data_size": 65536 00:11:04.412 }, 00:11:04.412 { 00:11:04.412 "name": "BaseBdev4", 00:11:04.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.412 "is_configured": false, 
00:11:04.412 "data_offset": 0, 00:11:04.412 "data_size": 0 00:11:04.412 } 00:11:04.412 ] 00:11:04.412 }' 00:11:04.412 19:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.412 19:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.670 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:04.670 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.670 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.932 [2024-11-26 19:00:31.326876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:04.932 [2024-11-26 19:00:31.326944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:04.932 [2024-11-26 19:00:31.326959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:04.932 [2024-11-26 19:00:31.327362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:04.932 [2024-11-26 19:00:31.327586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:04.932 [2024-11-26 19:00:31.327609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:04.932 [2024-11-26 19:00:31.327949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.932 BaseBdev4 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.932 [ 00:11:04.932 { 00:11:04.932 "name": "BaseBdev4", 00:11:04.932 "aliases": [ 00:11:04.932 "df061c7e-0215-4caf-a033-80c6e8687845" 00:11:04.932 ], 00:11:04.932 "product_name": "Malloc disk", 00:11:04.932 "block_size": 512, 00:11:04.932 "num_blocks": 65536, 00:11:04.932 "uuid": "df061c7e-0215-4caf-a033-80c6e8687845", 00:11:04.932 "assigned_rate_limits": { 00:11:04.932 "rw_ios_per_sec": 0, 00:11:04.932 "rw_mbytes_per_sec": 0, 00:11:04.932 "r_mbytes_per_sec": 0, 00:11:04.932 "w_mbytes_per_sec": 0 00:11:04.932 }, 00:11:04.932 "claimed": true, 00:11:04.932 "claim_type": "exclusive_write", 00:11:04.932 "zoned": false, 00:11:04.932 "supported_io_types": { 00:11:04.932 "read": true, 00:11:04.932 "write": true, 00:11:04.932 "unmap": true, 00:11:04.932 "flush": true, 00:11:04.932 "reset": true, 00:11:04.932 
"nvme_admin": false, 00:11:04.932 "nvme_io": false, 00:11:04.932 "nvme_io_md": false, 00:11:04.932 "write_zeroes": true, 00:11:04.932 "zcopy": true, 00:11:04.932 "get_zone_info": false, 00:11:04.932 "zone_management": false, 00:11:04.932 "zone_append": false, 00:11:04.932 "compare": false, 00:11:04.932 "compare_and_write": false, 00:11:04.932 "abort": true, 00:11:04.932 "seek_hole": false, 00:11:04.932 "seek_data": false, 00:11:04.932 "copy": true, 00:11:04.932 "nvme_iov_md": false 00:11:04.932 }, 00:11:04.932 "memory_domains": [ 00:11:04.932 { 00:11:04.932 "dma_device_id": "system", 00:11:04.932 "dma_device_type": 1 00:11:04.932 }, 00:11:04.932 { 00:11:04.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.932 "dma_device_type": 2 00:11:04.932 } 00:11:04.932 ], 00:11:04.932 "driver_specific": {} 00:11:04.932 } 00:11:04.932 ] 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.932 19:00:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.932 "name": "Existed_Raid", 00:11:04.932 "uuid": "00e830e7-bebc-40e8-8167-ef569619604b", 00:11:04.932 "strip_size_kb": 64, 00:11:04.932 "state": "online", 00:11:04.932 "raid_level": "raid0", 00:11:04.932 "superblock": false, 00:11:04.932 "num_base_bdevs": 4, 00:11:04.932 "num_base_bdevs_discovered": 4, 00:11:04.932 "num_base_bdevs_operational": 4, 00:11:04.932 "base_bdevs_list": [ 00:11:04.932 { 00:11:04.932 "name": "BaseBdev1", 00:11:04.932 "uuid": "3dcf2302-c06c-49df-a4fd-0f3479a86cc3", 00:11:04.932 "is_configured": true, 00:11:04.932 "data_offset": 0, 00:11:04.932 "data_size": 65536 00:11:04.932 }, 00:11:04.932 { 00:11:04.932 "name": "BaseBdev2", 00:11:04.932 "uuid": "e2fcd8b4-af72-4222-ac49-7e40bcaf2ddf", 00:11:04.932 "is_configured": true, 00:11:04.932 "data_offset": 0, 00:11:04.932 "data_size": 65536 00:11:04.932 }, 00:11:04.932 { 00:11:04.932 "name": "BaseBdev3", 00:11:04.932 "uuid": 
"4e601faf-2d36-4fbb-9fd0-60a8397a315d", 00:11:04.932 "is_configured": true, 00:11:04.932 "data_offset": 0, 00:11:04.932 "data_size": 65536 00:11:04.932 }, 00:11:04.932 { 00:11:04.932 "name": "BaseBdev4", 00:11:04.932 "uuid": "df061c7e-0215-4caf-a033-80c6e8687845", 00:11:04.932 "is_configured": true, 00:11:04.932 "data_offset": 0, 00:11:04.932 "data_size": 65536 00:11:04.932 } 00:11:04.932 ] 00:11:04.932 }' 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.932 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.499 [2024-11-26 19:00:31.915614] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.499 19:00:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.499 "name": "Existed_Raid", 00:11:05.499 "aliases": [ 00:11:05.499 "00e830e7-bebc-40e8-8167-ef569619604b" 00:11:05.499 ], 00:11:05.499 "product_name": "Raid Volume", 00:11:05.499 "block_size": 512, 00:11:05.499 "num_blocks": 262144, 00:11:05.499 "uuid": "00e830e7-bebc-40e8-8167-ef569619604b", 00:11:05.499 "assigned_rate_limits": { 00:11:05.499 "rw_ios_per_sec": 0, 00:11:05.499 "rw_mbytes_per_sec": 0, 00:11:05.499 "r_mbytes_per_sec": 0, 00:11:05.499 "w_mbytes_per_sec": 0 00:11:05.499 }, 00:11:05.499 "claimed": false, 00:11:05.499 "zoned": false, 00:11:05.499 "supported_io_types": { 00:11:05.499 "read": true, 00:11:05.499 "write": true, 00:11:05.499 "unmap": true, 00:11:05.499 "flush": true, 00:11:05.499 "reset": true, 00:11:05.499 "nvme_admin": false, 00:11:05.499 "nvme_io": false, 00:11:05.499 "nvme_io_md": false, 00:11:05.499 "write_zeroes": true, 00:11:05.499 "zcopy": false, 00:11:05.499 "get_zone_info": false, 00:11:05.499 "zone_management": false, 00:11:05.499 "zone_append": false, 00:11:05.499 "compare": false, 00:11:05.499 "compare_and_write": false, 00:11:05.499 "abort": false, 00:11:05.499 "seek_hole": false, 00:11:05.499 "seek_data": false, 00:11:05.499 "copy": false, 00:11:05.499 "nvme_iov_md": false 00:11:05.499 }, 00:11:05.499 "memory_domains": [ 00:11:05.499 { 00:11:05.499 "dma_device_id": "system", 00:11:05.499 "dma_device_type": 1 00:11:05.499 }, 00:11:05.499 { 00:11:05.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.499 "dma_device_type": 2 00:11:05.499 }, 00:11:05.499 { 00:11:05.499 "dma_device_id": "system", 00:11:05.499 "dma_device_type": 1 00:11:05.499 }, 00:11:05.499 { 00:11:05.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.499 "dma_device_type": 2 00:11:05.499 }, 00:11:05.499 { 00:11:05.499 "dma_device_id": "system", 00:11:05.499 "dma_device_type": 1 00:11:05.499 }, 00:11:05.499 { 00:11:05.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:05.499 "dma_device_type": 2 00:11:05.499 }, 00:11:05.499 { 00:11:05.499 "dma_device_id": "system", 00:11:05.499 "dma_device_type": 1 00:11:05.499 }, 00:11:05.499 { 00:11:05.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.499 "dma_device_type": 2 00:11:05.499 } 00:11:05.499 ], 00:11:05.499 "driver_specific": { 00:11:05.499 "raid": { 00:11:05.499 "uuid": "00e830e7-bebc-40e8-8167-ef569619604b", 00:11:05.499 "strip_size_kb": 64, 00:11:05.499 "state": "online", 00:11:05.499 "raid_level": "raid0", 00:11:05.499 "superblock": false, 00:11:05.499 "num_base_bdevs": 4, 00:11:05.499 "num_base_bdevs_discovered": 4, 00:11:05.499 "num_base_bdevs_operational": 4, 00:11:05.499 "base_bdevs_list": [ 00:11:05.499 { 00:11:05.499 "name": "BaseBdev1", 00:11:05.499 "uuid": "3dcf2302-c06c-49df-a4fd-0f3479a86cc3", 00:11:05.499 "is_configured": true, 00:11:05.499 "data_offset": 0, 00:11:05.499 "data_size": 65536 00:11:05.499 }, 00:11:05.499 { 00:11:05.499 "name": "BaseBdev2", 00:11:05.499 "uuid": "e2fcd8b4-af72-4222-ac49-7e40bcaf2ddf", 00:11:05.499 "is_configured": true, 00:11:05.499 "data_offset": 0, 00:11:05.499 "data_size": 65536 00:11:05.499 }, 00:11:05.499 { 00:11:05.499 "name": "BaseBdev3", 00:11:05.499 "uuid": "4e601faf-2d36-4fbb-9fd0-60a8397a315d", 00:11:05.499 "is_configured": true, 00:11:05.499 "data_offset": 0, 00:11:05.499 "data_size": 65536 00:11:05.499 }, 00:11:05.499 { 00:11:05.499 "name": "BaseBdev4", 00:11:05.499 "uuid": "df061c7e-0215-4caf-a033-80c6e8687845", 00:11:05.499 "is_configured": true, 00:11:05.499 "data_offset": 0, 00:11:05.499 "data_size": 65536 00:11:05.499 } 00:11:05.499 ] 00:11:05.499 } 00:11:05.499 } 00:11:05.499 }' 00:11:05.499 19:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.499 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:05.499 BaseBdev2 00:11:05.499 BaseBdev3 
00:11:05.499 BaseBdev4' 00:11:05.499 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.499 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.499 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.499 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:05.499 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.499 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.499 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.499 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.758 19:00:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.758 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.759 19:00:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.759 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.759 [2024-11-26 19:00:32.287287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.759 [2024-11-26 19:00:32.287374] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.759 [2024-11-26 19:00:32.287452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.017 "name": "Existed_Raid", 00:11:06.017 "uuid": "00e830e7-bebc-40e8-8167-ef569619604b", 00:11:06.017 "strip_size_kb": 64, 00:11:06.017 "state": "offline", 00:11:06.017 "raid_level": "raid0", 00:11:06.017 "superblock": false, 00:11:06.017 "num_base_bdevs": 4, 00:11:06.017 "num_base_bdevs_discovered": 3, 00:11:06.017 "num_base_bdevs_operational": 3, 00:11:06.017 "base_bdevs_list": [ 00:11:06.017 { 00:11:06.017 "name": null, 00:11:06.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.017 "is_configured": false, 00:11:06.017 "data_offset": 0, 00:11:06.017 "data_size": 65536 00:11:06.017 }, 00:11:06.017 { 00:11:06.017 "name": "BaseBdev2", 00:11:06.017 "uuid": "e2fcd8b4-af72-4222-ac49-7e40bcaf2ddf", 00:11:06.017 "is_configured": 
true, 00:11:06.017 "data_offset": 0, 00:11:06.017 "data_size": 65536 00:11:06.017 }, 00:11:06.017 { 00:11:06.017 "name": "BaseBdev3", 00:11:06.017 "uuid": "4e601faf-2d36-4fbb-9fd0-60a8397a315d", 00:11:06.017 "is_configured": true, 00:11:06.017 "data_offset": 0, 00:11:06.017 "data_size": 65536 00:11:06.017 }, 00:11:06.017 { 00:11:06.017 "name": "BaseBdev4", 00:11:06.017 "uuid": "df061c7e-0215-4caf-a033-80c6e8687845", 00:11:06.017 "is_configured": true, 00:11:06.017 "data_offset": 0, 00:11:06.017 "data_size": 65536 00:11:06.017 } 00:11:06.017 ] 00:11:06.017 }' 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.017 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.275 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:06.275 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.275 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.275 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.275 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.275 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.533 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.533 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.533 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.533 19:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:06.533 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:06.533 19:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.533 [2024-11-26 19:00:32.946020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.533 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.533 [2024-11-26 19:00:33.101714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.792 19:00:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.792 [2024-11-26 19:00:33.254817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:06.792 [2024-11-26 19:00:33.254890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.792 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.051 BaseBdev2 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.051 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.051 [ 00:11:07.051 { 00:11:07.051 "name": "BaseBdev2", 00:11:07.051 "aliases": [ 00:11:07.051 "4f6fc65e-2bff-42d1-9bb0-0679080d8a79" 00:11:07.051 ], 00:11:07.051 "product_name": "Malloc disk", 00:11:07.051 "block_size": 512, 00:11:07.051 "num_blocks": 65536, 00:11:07.051 "uuid": "4f6fc65e-2bff-42d1-9bb0-0679080d8a79", 00:11:07.051 "assigned_rate_limits": { 00:11:07.051 "rw_ios_per_sec": 0, 00:11:07.051 "rw_mbytes_per_sec": 0, 00:11:07.051 "r_mbytes_per_sec": 0, 00:11:07.051 "w_mbytes_per_sec": 0 00:11:07.051 }, 00:11:07.051 "claimed": false, 00:11:07.051 "zoned": false, 00:11:07.051 "supported_io_types": { 00:11:07.051 "read": true, 00:11:07.051 "write": true, 00:11:07.051 "unmap": true, 00:11:07.051 "flush": true, 00:11:07.051 "reset": true, 00:11:07.051 "nvme_admin": false, 00:11:07.051 "nvme_io": false, 00:11:07.051 "nvme_io_md": false, 00:11:07.051 "write_zeroes": true, 00:11:07.051 "zcopy": true, 00:11:07.051 "get_zone_info": false, 00:11:07.051 "zone_management": false, 00:11:07.051 "zone_append": false, 00:11:07.051 "compare": false, 00:11:07.051 "compare_and_write": false, 00:11:07.051 "abort": true, 00:11:07.051 "seek_hole": false, 00:11:07.051 
"seek_data": false, 00:11:07.051 "copy": true, 00:11:07.051 "nvme_iov_md": false 00:11:07.051 }, 00:11:07.051 "memory_domains": [ 00:11:07.051 { 00:11:07.051 "dma_device_id": "system", 00:11:07.051 "dma_device_type": 1 00:11:07.051 }, 00:11:07.051 { 00:11:07.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.052 "dma_device_type": 2 00:11:07.052 } 00:11:07.052 ], 00:11:07.052 "driver_specific": {} 00:11:07.052 } 00:11:07.052 ] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.052 BaseBdev3 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.052 [ 00:11:07.052 { 00:11:07.052 "name": "BaseBdev3", 00:11:07.052 "aliases": [ 00:11:07.052 "9828205c-ba38-4dcb-910b-725c3f87deee" 00:11:07.052 ], 00:11:07.052 "product_name": "Malloc disk", 00:11:07.052 "block_size": 512, 00:11:07.052 "num_blocks": 65536, 00:11:07.052 "uuid": "9828205c-ba38-4dcb-910b-725c3f87deee", 00:11:07.052 "assigned_rate_limits": { 00:11:07.052 "rw_ios_per_sec": 0, 00:11:07.052 "rw_mbytes_per_sec": 0, 00:11:07.052 "r_mbytes_per_sec": 0, 00:11:07.052 "w_mbytes_per_sec": 0 00:11:07.052 }, 00:11:07.052 "claimed": false, 00:11:07.052 "zoned": false, 00:11:07.052 "supported_io_types": { 00:11:07.052 "read": true, 00:11:07.052 "write": true, 00:11:07.052 "unmap": true, 00:11:07.052 "flush": true, 00:11:07.052 "reset": true, 00:11:07.052 "nvme_admin": false, 00:11:07.052 "nvme_io": false, 00:11:07.052 "nvme_io_md": false, 00:11:07.052 "write_zeroes": true, 00:11:07.052 "zcopy": true, 00:11:07.052 "get_zone_info": false, 00:11:07.052 "zone_management": false, 00:11:07.052 "zone_append": false, 00:11:07.052 "compare": false, 00:11:07.052 "compare_and_write": false, 00:11:07.052 "abort": true, 00:11:07.052 "seek_hole": false, 00:11:07.052 "seek_data": false, 
00:11:07.052 "copy": true, 00:11:07.052 "nvme_iov_md": false 00:11:07.052 }, 00:11:07.052 "memory_domains": [ 00:11:07.052 { 00:11:07.052 "dma_device_id": "system", 00:11:07.052 "dma_device_type": 1 00:11:07.052 }, 00:11:07.052 { 00:11:07.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.052 "dma_device_type": 2 00:11:07.052 } 00:11:07.052 ], 00:11:07.052 "driver_specific": {} 00:11:07.052 } 00:11:07.052 ] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.052 BaseBdev4 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.052 
19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.052 [ 00:11:07.052 { 00:11:07.052 "name": "BaseBdev4", 00:11:07.052 "aliases": [ 00:11:07.052 "d8ce2a7e-f570-45e2-8b51-eedbe3d0a16c" 00:11:07.052 ], 00:11:07.052 "product_name": "Malloc disk", 00:11:07.052 "block_size": 512, 00:11:07.052 "num_blocks": 65536, 00:11:07.052 "uuid": "d8ce2a7e-f570-45e2-8b51-eedbe3d0a16c", 00:11:07.052 "assigned_rate_limits": { 00:11:07.052 "rw_ios_per_sec": 0, 00:11:07.052 "rw_mbytes_per_sec": 0, 00:11:07.052 "r_mbytes_per_sec": 0, 00:11:07.052 "w_mbytes_per_sec": 0 00:11:07.052 }, 00:11:07.052 "claimed": false, 00:11:07.052 "zoned": false, 00:11:07.052 "supported_io_types": { 00:11:07.052 "read": true, 00:11:07.052 "write": true, 00:11:07.052 "unmap": true, 00:11:07.052 "flush": true, 00:11:07.052 "reset": true, 00:11:07.052 "nvme_admin": false, 00:11:07.052 "nvme_io": false, 00:11:07.052 "nvme_io_md": false, 00:11:07.052 "write_zeroes": true, 00:11:07.052 "zcopy": true, 00:11:07.052 "get_zone_info": false, 00:11:07.052 "zone_management": false, 00:11:07.052 "zone_append": false, 00:11:07.052 "compare": false, 00:11:07.052 "compare_and_write": false, 00:11:07.052 "abort": true, 00:11:07.052 "seek_hole": false, 00:11:07.052 "seek_data": false, 00:11:07.052 
"copy": true, 00:11:07.052 "nvme_iov_md": false 00:11:07.052 }, 00:11:07.052 "memory_domains": [ 00:11:07.052 { 00:11:07.052 "dma_device_id": "system", 00:11:07.052 "dma_device_type": 1 00:11:07.052 }, 00:11:07.052 { 00:11:07.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.052 "dma_device_type": 2 00:11:07.052 } 00:11:07.052 ], 00:11:07.052 "driver_specific": {} 00:11:07.052 } 00:11:07.052 ] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.052 [2024-11-26 19:00:33.654367] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.052 [2024-11-26 19:00:33.654425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.052 [2024-11-26 19:00:33.654461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.052 [2024-11-26 19:00:33.656958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.052 [2024-11-26 19:00:33.657029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.052 19:00:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.052 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.053 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.311 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.311 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.311 "name": "Existed_Raid", 00:11:07.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.311 "strip_size_kb": 64, 00:11:07.311 "state": "configuring", 00:11:07.311 
"raid_level": "raid0", 00:11:07.311 "superblock": false, 00:11:07.311 "num_base_bdevs": 4, 00:11:07.311 "num_base_bdevs_discovered": 3, 00:11:07.311 "num_base_bdevs_operational": 4, 00:11:07.311 "base_bdevs_list": [ 00:11:07.311 { 00:11:07.311 "name": "BaseBdev1", 00:11:07.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.311 "is_configured": false, 00:11:07.311 "data_offset": 0, 00:11:07.311 "data_size": 0 00:11:07.311 }, 00:11:07.311 { 00:11:07.311 "name": "BaseBdev2", 00:11:07.311 "uuid": "4f6fc65e-2bff-42d1-9bb0-0679080d8a79", 00:11:07.311 "is_configured": true, 00:11:07.311 "data_offset": 0, 00:11:07.311 "data_size": 65536 00:11:07.311 }, 00:11:07.311 { 00:11:07.311 "name": "BaseBdev3", 00:11:07.311 "uuid": "9828205c-ba38-4dcb-910b-725c3f87deee", 00:11:07.311 "is_configured": true, 00:11:07.311 "data_offset": 0, 00:11:07.311 "data_size": 65536 00:11:07.311 }, 00:11:07.311 { 00:11:07.311 "name": "BaseBdev4", 00:11:07.311 "uuid": "d8ce2a7e-f570-45e2-8b51-eedbe3d0a16c", 00:11:07.311 "is_configured": true, 00:11:07.311 "data_offset": 0, 00:11:07.311 "data_size": 65536 00:11:07.311 } 00:11:07.311 ] 00:11:07.311 }' 00:11:07.311 19:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.311 19:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.569 [2024-11-26 19:00:34.138539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.569 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.827 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.827 "name": "Existed_Raid", 00:11:07.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.827 "strip_size_kb": 64, 00:11:07.827 "state": "configuring", 00:11:07.827 "raid_level": "raid0", 00:11:07.827 "superblock": false, 00:11:07.827 
"num_base_bdevs": 4, 00:11:07.827 "num_base_bdevs_discovered": 2, 00:11:07.827 "num_base_bdevs_operational": 4, 00:11:07.827 "base_bdevs_list": [ 00:11:07.827 { 00:11:07.827 "name": "BaseBdev1", 00:11:07.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.827 "is_configured": false, 00:11:07.827 "data_offset": 0, 00:11:07.827 "data_size": 0 00:11:07.827 }, 00:11:07.827 { 00:11:07.827 "name": null, 00:11:07.828 "uuid": "4f6fc65e-2bff-42d1-9bb0-0679080d8a79", 00:11:07.828 "is_configured": false, 00:11:07.828 "data_offset": 0, 00:11:07.828 "data_size": 65536 00:11:07.828 }, 00:11:07.828 { 00:11:07.828 "name": "BaseBdev3", 00:11:07.828 "uuid": "9828205c-ba38-4dcb-910b-725c3f87deee", 00:11:07.828 "is_configured": true, 00:11:07.828 "data_offset": 0, 00:11:07.828 "data_size": 65536 00:11:07.828 }, 00:11:07.828 { 00:11:07.828 "name": "BaseBdev4", 00:11:07.828 "uuid": "d8ce2a7e-f570-45e2-8b51-eedbe3d0a16c", 00:11:07.828 "is_configured": true, 00:11:07.828 "data_offset": 0, 00:11:07.828 "data_size": 65536 00:11:07.828 } 00:11:07.828 ] 00:11:07.828 }' 00:11:07.828 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.828 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.086 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.086 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.086 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:08.086 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.086 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:08.445 19:00:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.445 [2024-11-26 19:00:34.753255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.445 BaseBdev1 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.445 [ 00:11:08.445 { 00:11:08.445 "name": "BaseBdev1", 00:11:08.445 "aliases": [ 00:11:08.445 "395727b9-d126-4fc3-bf3c-dee4336528b8" 00:11:08.445 ], 00:11:08.445 "product_name": "Malloc disk", 00:11:08.445 "block_size": 512, 00:11:08.445 "num_blocks": 65536, 00:11:08.445 "uuid": "395727b9-d126-4fc3-bf3c-dee4336528b8", 00:11:08.445 "assigned_rate_limits": { 00:11:08.445 "rw_ios_per_sec": 0, 00:11:08.445 "rw_mbytes_per_sec": 0, 00:11:08.445 "r_mbytes_per_sec": 0, 00:11:08.445 "w_mbytes_per_sec": 0 00:11:08.445 }, 00:11:08.445 "claimed": true, 00:11:08.445 "claim_type": "exclusive_write", 00:11:08.445 "zoned": false, 00:11:08.445 "supported_io_types": { 00:11:08.445 "read": true, 00:11:08.445 "write": true, 00:11:08.445 "unmap": true, 00:11:08.445 "flush": true, 00:11:08.445 "reset": true, 00:11:08.445 "nvme_admin": false, 00:11:08.445 "nvme_io": false, 00:11:08.445 "nvme_io_md": false, 00:11:08.445 "write_zeroes": true, 00:11:08.445 "zcopy": true, 00:11:08.445 "get_zone_info": false, 00:11:08.445 "zone_management": false, 00:11:08.445 "zone_append": false, 00:11:08.445 "compare": false, 00:11:08.445 "compare_and_write": false, 00:11:08.445 "abort": true, 00:11:08.445 "seek_hole": false, 00:11:08.445 "seek_data": false, 00:11:08.445 "copy": true, 00:11:08.445 "nvme_iov_md": false 00:11:08.445 }, 00:11:08.445 "memory_domains": [ 00:11:08.445 { 00:11:08.445 "dma_device_id": "system", 00:11:08.445 "dma_device_type": 1 00:11:08.445 }, 00:11:08.445 { 00:11:08.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.445 "dma_device_type": 2 00:11:08.445 } 00:11:08.445 ], 00:11:08.445 "driver_specific": {} 00:11:08.445 } 00:11:08.445 ] 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.445 "name": "Existed_Raid", 00:11:08.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.445 "strip_size_kb": 64, 00:11:08.445 "state": "configuring", 00:11:08.445 "raid_level": "raid0", 00:11:08.445 "superblock": false, 
00:11:08.445 "num_base_bdevs": 4, 00:11:08.445 "num_base_bdevs_discovered": 3, 00:11:08.445 "num_base_bdevs_operational": 4, 00:11:08.445 "base_bdevs_list": [ 00:11:08.445 { 00:11:08.445 "name": "BaseBdev1", 00:11:08.445 "uuid": "395727b9-d126-4fc3-bf3c-dee4336528b8", 00:11:08.445 "is_configured": true, 00:11:08.445 "data_offset": 0, 00:11:08.445 "data_size": 65536 00:11:08.445 }, 00:11:08.445 { 00:11:08.445 "name": null, 00:11:08.445 "uuid": "4f6fc65e-2bff-42d1-9bb0-0679080d8a79", 00:11:08.445 "is_configured": false, 00:11:08.445 "data_offset": 0, 00:11:08.445 "data_size": 65536 00:11:08.445 }, 00:11:08.445 { 00:11:08.445 "name": "BaseBdev3", 00:11:08.445 "uuid": "9828205c-ba38-4dcb-910b-725c3f87deee", 00:11:08.445 "is_configured": true, 00:11:08.445 "data_offset": 0, 00:11:08.445 "data_size": 65536 00:11:08.445 }, 00:11:08.445 { 00:11:08.445 "name": "BaseBdev4", 00:11:08.445 "uuid": "d8ce2a7e-f570-45e2-8b51-eedbe3d0a16c", 00:11:08.445 "is_configured": true, 00:11:08.445 "data_offset": 0, 00:11:08.445 "data_size": 65536 00:11:08.445 } 00:11:08.445 ] 00:11:08.445 }' 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.445 19:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.705 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.705 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.705 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.705 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.705 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:08.963 19:00:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.963 [2024-11-26 19:00:35.341497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.963 19:00:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.963 "name": "Existed_Raid", 00:11:08.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.963 "strip_size_kb": 64, 00:11:08.963 "state": "configuring", 00:11:08.963 "raid_level": "raid0", 00:11:08.963 "superblock": false, 00:11:08.963 "num_base_bdevs": 4, 00:11:08.963 "num_base_bdevs_discovered": 2, 00:11:08.963 "num_base_bdevs_operational": 4, 00:11:08.963 "base_bdevs_list": [ 00:11:08.963 { 00:11:08.963 "name": "BaseBdev1", 00:11:08.963 "uuid": "395727b9-d126-4fc3-bf3c-dee4336528b8", 00:11:08.963 "is_configured": true, 00:11:08.963 "data_offset": 0, 00:11:08.963 "data_size": 65536 00:11:08.963 }, 00:11:08.963 { 00:11:08.963 "name": null, 00:11:08.963 "uuid": "4f6fc65e-2bff-42d1-9bb0-0679080d8a79", 00:11:08.963 "is_configured": false, 00:11:08.963 "data_offset": 0, 00:11:08.963 "data_size": 65536 00:11:08.963 }, 00:11:08.963 { 00:11:08.963 "name": null, 00:11:08.963 "uuid": "9828205c-ba38-4dcb-910b-725c3f87deee", 00:11:08.963 "is_configured": false, 00:11:08.963 "data_offset": 0, 00:11:08.963 "data_size": 65536 00:11:08.963 }, 00:11:08.963 { 00:11:08.963 "name": "BaseBdev4", 00:11:08.963 "uuid": "d8ce2a7e-f570-45e2-8b51-eedbe3d0a16c", 00:11:08.963 "is_configured": true, 00:11:08.963 "data_offset": 0, 00:11:08.963 "data_size": 65536 00:11:08.963 } 00:11:08.963 ] 00:11:08.963 }' 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.963 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.530 19:00:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.530 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:09.530 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.530 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.530 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.530 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:09.530 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:09.530 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.530 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.530 [2024-11-26 19:00:35.905709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.530 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.531 "name": "Existed_Raid", 00:11:09.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.531 "strip_size_kb": 64, 00:11:09.531 "state": "configuring", 00:11:09.531 "raid_level": "raid0", 00:11:09.531 "superblock": false, 00:11:09.531 "num_base_bdevs": 4, 00:11:09.531 "num_base_bdevs_discovered": 3, 00:11:09.531 "num_base_bdevs_operational": 4, 00:11:09.531 "base_bdevs_list": [ 00:11:09.531 { 00:11:09.531 "name": "BaseBdev1", 00:11:09.531 "uuid": "395727b9-d126-4fc3-bf3c-dee4336528b8", 00:11:09.531 "is_configured": true, 00:11:09.531 "data_offset": 0, 00:11:09.531 "data_size": 65536 00:11:09.531 }, 00:11:09.531 { 00:11:09.531 "name": null, 00:11:09.531 "uuid": "4f6fc65e-2bff-42d1-9bb0-0679080d8a79", 00:11:09.531 "is_configured": false, 00:11:09.531 "data_offset": 0, 00:11:09.531 "data_size": 65536 00:11:09.531 }, 00:11:09.531 { 00:11:09.531 "name": "BaseBdev3", 00:11:09.531 "uuid": "9828205c-ba38-4dcb-910b-725c3f87deee", 
00:11:09.531 "is_configured": true, 00:11:09.531 "data_offset": 0, 00:11:09.531 "data_size": 65536 00:11:09.531 }, 00:11:09.531 { 00:11:09.531 "name": "BaseBdev4", 00:11:09.531 "uuid": "d8ce2a7e-f570-45e2-8b51-eedbe3d0a16c", 00:11:09.531 "is_configured": true, 00:11:09.531 "data_offset": 0, 00:11:09.531 "data_size": 65536 00:11:09.531 } 00:11:09.531 ] 00:11:09.531 }' 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.531 19:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.098 [2024-11-26 19:00:36.477949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:10.098 19:00:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.098 "name": "Existed_Raid", 00:11:10.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.098 "strip_size_kb": 64, 00:11:10.098 "state": "configuring", 00:11:10.098 "raid_level": "raid0", 00:11:10.098 "superblock": false, 00:11:10.098 "num_base_bdevs": 4, 00:11:10.098 "num_base_bdevs_discovered": 2, 00:11:10.098 
"num_base_bdevs_operational": 4, 00:11:10.098 "base_bdevs_list": [ 00:11:10.098 { 00:11:10.098 "name": null, 00:11:10.098 "uuid": "395727b9-d126-4fc3-bf3c-dee4336528b8", 00:11:10.098 "is_configured": false, 00:11:10.098 "data_offset": 0, 00:11:10.098 "data_size": 65536 00:11:10.098 }, 00:11:10.098 { 00:11:10.098 "name": null, 00:11:10.098 "uuid": "4f6fc65e-2bff-42d1-9bb0-0679080d8a79", 00:11:10.098 "is_configured": false, 00:11:10.098 "data_offset": 0, 00:11:10.098 "data_size": 65536 00:11:10.098 }, 00:11:10.098 { 00:11:10.098 "name": "BaseBdev3", 00:11:10.098 "uuid": "9828205c-ba38-4dcb-910b-725c3f87deee", 00:11:10.098 "is_configured": true, 00:11:10.098 "data_offset": 0, 00:11:10.098 "data_size": 65536 00:11:10.098 }, 00:11:10.098 { 00:11:10.098 "name": "BaseBdev4", 00:11:10.098 "uuid": "d8ce2a7e-f570-45e2-8b51-eedbe3d0a16c", 00:11:10.098 "is_configured": true, 00:11:10.098 "data_offset": 0, 00:11:10.098 "data_size": 65536 00:11:10.098 } 00:11:10.098 ] 00:11:10.098 }' 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.098 19:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.663 [2024-11-26 19:00:37.121544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.663 19:00:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.663 "name": "Existed_Raid", 00:11:10.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.663 "strip_size_kb": 64, 00:11:10.663 "state": "configuring", 00:11:10.663 "raid_level": "raid0", 00:11:10.663 "superblock": false, 00:11:10.663 "num_base_bdevs": 4, 00:11:10.663 "num_base_bdevs_discovered": 3, 00:11:10.663 "num_base_bdevs_operational": 4, 00:11:10.663 "base_bdevs_list": [ 00:11:10.663 { 00:11:10.663 "name": null, 00:11:10.663 "uuid": "395727b9-d126-4fc3-bf3c-dee4336528b8", 00:11:10.663 "is_configured": false, 00:11:10.663 "data_offset": 0, 00:11:10.663 "data_size": 65536 00:11:10.663 }, 00:11:10.663 { 00:11:10.663 "name": "BaseBdev2", 00:11:10.663 "uuid": "4f6fc65e-2bff-42d1-9bb0-0679080d8a79", 00:11:10.663 "is_configured": true, 00:11:10.663 "data_offset": 0, 00:11:10.663 "data_size": 65536 00:11:10.663 }, 00:11:10.663 { 00:11:10.663 "name": "BaseBdev3", 00:11:10.663 "uuid": "9828205c-ba38-4dcb-910b-725c3f87deee", 00:11:10.663 "is_configured": true, 00:11:10.663 "data_offset": 0, 00:11:10.663 "data_size": 65536 00:11:10.663 }, 00:11:10.663 { 00:11:10.663 "name": "BaseBdev4", 00:11:10.663 "uuid": "d8ce2a7e-f570-45e2-8b51-eedbe3d0a16c", 00:11:10.663 "is_configured": true, 00:11:10.663 "data_offset": 0, 00:11:10.663 "data_size": 65536 00:11:10.663 } 00:11:10.663 ] 00:11:10.663 }' 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.663 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.227 19:00:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 395727b9-d126-4fc3-bf3c-dee4336528b8 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.227 [2024-11-26 19:00:37.776337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:11.227 [2024-11-26 19:00:37.776407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:11.227 [2024-11-26 19:00:37.776419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:11.227 [2024-11-26 19:00:37.776761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:11:11.227 [2024-11-26 19:00:37.776950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:11.227 [2024-11-26 19:00:37.776971] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:11.227 [2024-11-26 19:00:37.777348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.227 NewBaseBdev 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:11.227 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:11.228 [ 00:11:11.228 { 00:11:11.228 "name": "NewBaseBdev", 00:11:11.228 "aliases": [ 00:11:11.228 "395727b9-d126-4fc3-bf3c-dee4336528b8" 00:11:11.228 ], 00:11:11.228 "product_name": "Malloc disk", 00:11:11.228 "block_size": 512, 00:11:11.228 "num_blocks": 65536, 00:11:11.228 "uuid": "395727b9-d126-4fc3-bf3c-dee4336528b8", 00:11:11.228 "assigned_rate_limits": { 00:11:11.228 "rw_ios_per_sec": 0, 00:11:11.228 "rw_mbytes_per_sec": 0, 00:11:11.228 "r_mbytes_per_sec": 0, 00:11:11.228 "w_mbytes_per_sec": 0 00:11:11.228 }, 00:11:11.228 "claimed": true, 00:11:11.228 "claim_type": "exclusive_write", 00:11:11.228 "zoned": false, 00:11:11.228 "supported_io_types": { 00:11:11.228 "read": true, 00:11:11.228 "write": true, 00:11:11.228 "unmap": true, 00:11:11.228 "flush": true, 00:11:11.228 "reset": true, 00:11:11.228 "nvme_admin": false, 00:11:11.228 "nvme_io": false, 00:11:11.228 "nvme_io_md": false, 00:11:11.228 "write_zeroes": true, 00:11:11.228 "zcopy": true, 00:11:11.228 "get_zone_info": false, 00:11:11.228 "zone_management": false, 00:11:11.228 "zone_append": false, 00:11:11.228 "compare": false, 00:11:11.228 "compare_and_write": false, 00:11:11.228 "abort": true, 00:11:11.228 "seek_hole": false, 00:11:11.228 "seek_data": false, 00:11:11.228 "copy": true, 00:11:11.228 "nvme_iov_md": false 00:11:11.228 }, 00:11:11.228 "memory_domains": [ 00:11:11.228 { 00:11:11.228 "dma_device_id": "system", 00:11:11.228 "dma_device_type": 1 00:11:11.228 }, 00:11:11.228 { 00:11:11.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.228 "dma_device_type": 2 00:11:11.228 } 00:11:11.228 ], 00:11:11.228 "driver_specific": {} 00:11:11.228 } 00:11:11.228 ] 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.228 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.485 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.486 "name": "Existed_Raid", 00:11:11.486 "uuid": "99ba01bf-cdf9-4393-8841-e51085a8a982", 00:11:11.486 "strip_size_kb": 64, 00:11:11.486 "state": "online", 00:11:11.486 "raid_level": "raid0", 00:11:11.486 "superblock": false, 00:11:11.486 "num_base_bdevs": 4, 00:11:11.486 
"num_base_bdevs_discovered": 4, 00:11:11.486 "num_base_bdevs_operational": 4, 00:11:11.486 "base_bdevs_list": [ 00:11:11.486 { 00:11:11.486 "name": "NewBaseBdev", 00:11:11.486 "uuid": "395727b9-d126-4fc3-bf3c-dee4336528b8", 00:11:11.486 "is_configured": true, 00:11:11.486 "data_offset": 0, 00:11:11.486 "data_size": 65536 00:11:11.486 }, 00:11:11.486 { 00:11:11.486 "name": "BaseBdev2", 00:11:11.486 "uuid": "4f6fc65e-2bff-42d1-9bb0-0679080d8a79", 00:11:11.486 "is_configured": true, 00:11:11.486 "data_offset": 0, 00:11:11.486 "data_size": 65536 00:11:11.486 }, 00:11:11.486 { 00:11:11.486 "name": "BaseBdev3", 00:11:11.486 "uuid": "9828205c-ba38-4dcb-910b-725c3f87deee", 00:11:11.486 "is_configured": true, 00:11:11.486 "data_offset": 0, 00:11:11.486 "data_size": 65536 00:11:11.486 }, 00:11:11.486 { 00:11:11.486 "name": "BaseBdev4", 00:11:11.486 "uuid": "d8ce2a7e-f570-45e2-8b51-eedbe3d0a16c", 00:11:11.486 "is_configured": true, 00:11:11.486 "data_offset": 0, 00:11:11.486 "data_size": 65536 00:11:11.486 } 00:11:11.486 ] 00:11:11.486 }' 00:11:11.486 19:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.486 19:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.743 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:11.743 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:11.743 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.743 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.743 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.743 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.743 19:00:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.743 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:11.743 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.743 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.743 [2024-11-26 19:00:38.329027] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.743 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:12.002 "name": "Existed_Raid", 00:11:12.002 "aliases": [ 00:11:12.002 "99ba01bf-cdf9-4393-8841-e51085a8a982" 00:11:12.002 ], 00:11:12.002 "product_name": "Raid Volume", 00:11:12.002 "block_size": 512, 00:11:12.002 "num_blocks": 262144, 00:11:12.002 "uuid": "99ba01bf-cdf9-4393-8841-e51085a8a982", 00:11:12.002 "assigned_rate_limits": { 00:11:12.002 "rw_ios_per_sec": 0, 00:11:12.002 "rw_mbytes_per_sec": 0, 00:11:12.002 "r_mbytes_per_sec": 0, 00:11:12.002 "w_mbytes_per_sec": 0 00:11:12.002 }, 00:11:12.002 "claimed": false, 00:11:12.002 "zoned": false, 00:11:12.002 "supported_io_types": { 00:11:12.002 "read": true, 00:11:12.002 "write": true, 00:11:12.002 "unmap": true, 00:11:12.002 "flush": true, 00:11:12.002 "reset": true, 00:11:12.002 "nvme_admin": false, 00:11:12.002 "nvme_io": false, 00:11:12.002 "nvme_io_md": false, 00:11:12.002 "write_zeroes": true, 00:11:12.002 "zcopy": false, 00:11:12.002 "get_zone_info": false, 00:11:12.002 "zone_management": false, 00:11:12.002 "zone_append": false, 00:11:12.002 "compare": false, 00:11:12.002 "compare_and_write": false, 00:11:12.002 "abort": false, 00:11:12.002 "seek_hole": false, 00:11:12.002 "seek_data": false, 00:11:12.002 "copy": false, 00:11:12.002 "nvme_iov_md": false 00:11:12.002 }, 00:11:12.002 "memory_domains": [ 
00:11:12.002 { 00:11:12.002 "dma_device_id": "system", 00:11:12.002 "dma_device_type": 1 00:11:12.002 }, 00:11:12.002 { 00:11:12.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.002 "dma_device_type": 2 00:11:12.002 }, 00:11:12.002 { 00:11:12.002 "dma_device_id": "system", 00:11:12.002 "dma_device_type": 1 00:11:12.002 }, 00:11:12.002 { 00:11:12.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.002 "dma_device_type": 2 00:11:12.002 }, 00:11:12.002 { 00:11:12.002 "dma_device_id": "system", 00:11:12.002 "dma_device_type": 1 00:11:12.002 }, 00:11:12.002 { 00:11:12.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.002 "dma_device_type": 2 00:11:12.002 }, 00:11:12.002 { 00:11:12.002 "dma_device_id": "system", 00:11:12.002 "dma_device_type": 1 00:11:12.002 }, 00:11:12.002 { 00:11:12.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.002 "dma_device_type": 2 00:11:12.002 } 00:11:12.002 ], 00:11:12.002 "driver_specific": { 00:11:12.002 "raid": { 00:11:12.002 "uuid": "99ba01bf-cdf9-4393-8841-e51085a8a982", 00:11:12.002 "strip_size_kb": 64, 00:11:12.002 "state": "online", 00:11:12.002 "raid_level": "raid0", 00:11:12.002 "superblock": false, 00:11:12.002 "num_base_bdevs": 4, 00:11:12.002 "num_base_bdevs_discovered": 4, 00:11:12.002 "num_base_bdevs_operational": 4, 00:11:12.002 "base_bdevs_list": [ 00:11:12.002 { 00:11:12.002 "name": "NewBaseBdev", 00:11:12.002 "uuid": "395727b9-d126-4fc3-bf3c-dee4336528b8", 00:11:12.002 "is_configured": true, 00:11:12.002 "data_offset": 0, 00:11:12.002 "data_size": 65536 00:11:12.002 }, 00:11:12.002 { 00:11:12.002 "name": "BaseBdev2", 00:11:12.002 "uuid": "4f6fc65e-2bff-42d1-9bb0-0679080d8a79", 00:11:12.002 "is_configured": true, 00:11:12.002 "data_offset": 0, 00:11:12.002 "data_size": 65536 00:11:12.002 }, 00:11:12.002 { 00:11:12.002 "name": "BaseBdev3", 00:11:12.002 "uuid": "9828205c-ba38-4dcb-910b-725c3f87deee", 00:11:12.002 "is_configured": true, 00:11:12.002 "data_offset": 0, 00:11:12.002 "data_size": 65536 
00:11:12.002 }, 00:11:12.002 { 00:11:12.002 "name": "BaseBdev4", 00:11:12.002 "uuid": "d8ce2a7e-f570-45e2-8b51-eedbe3d0a16c", 00:11:12.002 "is_configured": true, 00:11:12.002 "data_offset": 0, 00:11:12.002 "data_size": 65536 00:11:12.002 } 00:11:12.002 ] 00:11:12.002 } 00:11:12.002 } 00:11:12.002 }' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:12.002 BaseBdev2 00:11:12.002 BaseBdev3 00:11:12.002 BaseBdev4' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.002 
19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.002 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.260 [2024-11-26 19:00:38.688622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.260 [2024-11-26 19:00:38.688666] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.260 [2024-11-26 19:00:38.688784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.260 [2024-11-26 19:00:38.688890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.260 [2024-11-26 19:00:38.688908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69830 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69830 ']' 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69830 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69830 00:11:12.260 killing process with pid 69830 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69830' 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69830 00:11:12.260 [2024-11-26 19:00:38.725090] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.260 19:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69830 00:11:12.518 [2024-11-26 19:00:39.128519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:13.887 00:11:13.887 real 0m13.119s 00:11:13.887 user 0m21.480s 00:11:13.887 sys 0m1.884s 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 ************************************ 00:11:13.887 END TEST raid_state_function_test 00:11:13.887 ************************************ 00:11:13.887 19:00:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:11:13.887 19:00:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:13.887 19:00:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.887 19:00:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 ************************************ 00:11:13.887 START TEST raid_state_function_test_sb 00:11:13.887 ************************************ 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:13.887 
19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:13.887 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:13.888 Process raid pid: 70518 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70518 00:11:13.888 19:00:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70518' 00:11:13.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70518 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70518 ']' 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.888 19:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.145 [2024-11-26 19:00:40.520052] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:11:14.145 [2024-11-26 19:00:40.520448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.145 [2024-11-26 19:00:40.709462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.404 [2024-11-26 19:00:40.895313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.662 [2024-11-26 19:00:41.142519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.662 [2024-11-26 19:00:41.142592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.230 [2024-11-26 19:00:41.547045] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.230 [2024-11-26 19:00:41.547109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.230 [2024-11-26 19:00:41.547127] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.230 [2024-11-26 19:00:41.547143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.230 [2024-11-26 19:00:41.547154] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:15.230 [2024-11-26 19:00:41.547168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.230 [2024-11-26 19:00:41.547177] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:15.230 [2024-11-26 19:00:41.547192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.230 19:00:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.230 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.230 "name": "Existed_Raid", 00:11:15.230 "uuid": "51349118-8d4f-4e9a-bd9e-33aaabb3f753", 00:11:15.230 "strip_size_kb": 64, 00:11:15.230 "state": "configuring", 00:11:15.230 "raid_level": "raid0", 00:11:15.230 "superblock": true, 00:11:15.230 "num_base_bdevs": 4, 00:11:15.230 "num_base_bdevs_discovered": 0, 00:11:15.230 "num_base_bdevs_operational": 4, 00:11:15.230 "base_bdevs_list": [ 00:11:15.230 { 00:11:15.230 "name": "BaseBdev1", 00:11:15.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.230 "is_configured": false, 00:11:15.230 "data_offset": 0, 00:11:15.230 "data_size": 0 00:11:15.230 }, 00:11:15.230 { 00:11:15.231 "name": "BaseBdev2", 00:11:15.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.231 "is_configured": false, 00:11:15.231 "data_offset": 0, 00:11:15.231 "data_size": 0 00:11:15.231 }, 00:11:15.231 { 00:11:15.231 "name": "BaseBdev3", 00:11:15.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.231 "is_configured": false, 00:11:15.231 "data_offset": 0, 00:11:15.231 "data_size": 0 00:11:15.231 }, 00:11:15.231 { 00:11:15.231 "name": "BaseBdev4", 00:11:15.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.231 "is_configured": false, 00:11:15.231 "data_offset": 0, 00:11:15.231 "data_size": 0 00:11:15.231 } 00:11:15.231 ] 00:11:15.231 }' 00:11:15.231 19:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.231 19:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.491 19:00:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:15.491 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.491 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.491 [2024-11-26 19:00:42.071189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.491 [2024-11-26 19:00:42.071402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:15.491 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.491 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.491 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.491 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.491 [2024-11-26 19:00:42.079164] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.491 [2024-11-26 19:00:42.079364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.491 [2024-11-26 19:00:42.079490] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.491 [2024-11-26 19:00:42.079560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.491 [2024-11-26 19:00:42.079797] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.491 [2024-11-26 19:00:42.079860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.491 [2024-11-26 19:00:42.079973] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:15.491 [2024-11-26 19:00:42.080032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.491 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.491 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:15.491 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.491 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.750 [2024-11-26 19:00:42.129666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.750 BaseBdev1 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.750 [ 00:11:15.750 { 00:11:15.750 "name": "BaseBdev1", 00:11:15.750 "aliases": [ 00:11:15.750 "bb079add-3ba1-44f0-b768-6fd5a8b5af87" 00:11:15.750 ], 00:11:15.750 "product_name": "Malloc disk", 00:11:15.750 "block_size": 512, 00:11:15.750 "num_blocks": 65536, 00:11:15.750 "uuid": "bb079add-3ba1-44f0-b768-6fd5a8b5af87", 00:11:15.750 "assigned_rate_limits": { 00:11:15.750 "rw_ios_per_sec": 0, 00:11:15.750 "rw_mbytes_per_sec": 0, 00:11:15.750 "r_mbytes_per_sec": 0, 00:11:15.750 "w_mbytes_per_sec": 0 00:11:15.750 }, 00:11:15.750 "claimed": true, 00:11:15.750 "claim_type": "exclusive_write", 00:11:15.750 "zoned": false, 00:11:15.750 "supported_io_types": { 00:11:15.750 "read": true, 00:11:15.750 "write": true, 00:11:15.750 "unmap": true, 00:11:15.750 "flush": true, 00:11:15.750 "reset": true, 00:11:15.750 "nvme_admin": false, 00:11:15.750 "nvme_io": false, 00:11:15.750 "nvme_io_md": false, 00:11:15.750 "write_zeroes": true, 00:11:15.750 "zcopy": true, 00:11:15.750 "get_zone_info": false, 00:11:15.750 "zone_management": false, 00:11:15.750 "zone_append": false, 00:11:15.750 "compare": false, 00:11:15.750 "compare_and_write": false, 00:11:15.750 "abort": true, 00:11:15.750 "seek_hole": false, 00:11:15.750 "seek_data": false, 00:11:15.750 "copy": true, 00:11:15.750 "nvme_iov_md": false 00:11:15.750 }, 00:11:15.750 "memory_domains": [ 00:11:15.750 { 00:11:15.750 "dma_device_id": "system", 00:11:15.750 "dma_device_type": 1 00:11:15.750 }, 00:11:15.750 { 00:11:15.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.750 "dma_device_type": 2 00:11:15.750 } 00:11:15.750 ], 00:11:15.750 "driver_specific": {} 
00:11:15.750 } 00:11:15.750 ] 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.750 "name": "Existed_Raid", 00:11:15.750 "uuid": "cd031dbd-9266-4622-a99c-f367431cba48", 00:11:15.750 "strip_size_kb": 64, 00:11:15.750 "state": "configuring", 00:11:15.750 "raid_level": "raid0", 00:11:15.750 "superblock": true, 00:11:15.750 "num_base_bdevs": 4, 00:11:15.750 "num_base_bdevs_discovered": 1, 00:11:15.750 "num_base_bdevs_operational": 4, 00:11:15.750 "base_bdevs_list": [ 00:11:15.750 { 00:11:15.750 "name": "BaseBdev1", 00:11:15.750 "uuid": "bb079add-3ba1-44f0-b768-6fd5a8b5af87", 00:11:15.750 "is_configured": true, 00:11:15.750 "data_offset": 2048, 00:11:15.750 "data_size": 63488 00:11:15.750 }, 00:11:15.750 { 00:11:15.750 "name": "BaseBdev2", 00:11:15.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.750 "is_configured": false, 00:11:15.750 "data_offset": 0, 00:11:15.750 "data_size": 0 00:11:15.750 }, 00:11:15.750 { 00:11:15.750 "name": "BaseBdev3", 00:11:15.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.750 "is_configured": false, 00:11:15.750 "data_offset": 0, 00:11:15.750 "data_size": 0 00:11:15.750 }, 00:11:15.750 { 00:11:15.750 "name": "BaseBdev4", 00:11:15.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.750 "is_configured": false, 00:11:15.750 "data_offset": 0, 00:11:15.750 "data_size": 0 00:11:15.750 } 00:11:15.750 ] 00:11:15.750 }' 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.750 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:16.315 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.315 19:00:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.315 [2024-11-26 19:00:42.681895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.315 [2024-11-26 19:00:42.681985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:16.315 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.315 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.315 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.315 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.315 [2024-11-26 19:00:42.693974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.316 [2024-11-26 19:00:42.696795] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.316 [2024-11-26 19:00:42.696976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.316 [2024-11-26 19:00:42.697185] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:16.316 [2024-11-26 19:00:42.697263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.316 [2024-11-26 19:00:42.697484] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:16.316 [2024-11-26 19:00:42.697543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:16.316 19:00:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.316 "name": 
"Existed_Raid", 00:11:16.316 "uuid": "94706882-309d-4833-af18-421715905d4f", 00:11:16.316 "strip_size_kb": 64, 00:11:16.316 "state": "configuring", 00:11:16.316 "raid_level": "raid0", 00:11:16.316 "superblock": true, 00:11:16.316 "num_base_bdevs": 4, 00:11:16.316 "num_base_bdevs_discovered": 1, 00:11:16.316 "num_base_bdevs_operational": 4, 00:11:16.316 "base_bdevs_list": [ 00:11:16.316 { 00:11:16.316 "name": "BaseBdev1", 00:11:16.316 "uuid": "bb079add-3ba1-44f0-b768-6fd5a8b5af87", 00:11:16.316 "is_configured": true, 00:11:16.316 "data_offset": 2048, 00:11:16.316 "data_size": 63488 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "name": "BaseBdev2", 00:11:16.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.316 "is_configured": false, 00:11:16.316 "data_offset": 0, 00:11:16.316 "data_size": 0 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "name": "BaseBdev3", 00:11:16.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.316 "is_configured": false, 00:11:16.316 "data_offset": 0, 00:11:16.316 "data_size": 0 00:11:16.316 }, 00:11:16.316 { 00:11:16.316 "name": "BaseBdev4", 00:11:16.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.316 "is_configured": false, 00:11:16.316 "data_offset": 0, 00:11:16.316 "data_size": 0 00:11:16.316 } 00:11:16.316 ] 00:11:16.316 }' 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.316 19:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.574 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:16.574 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.574 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.833 [2024-11-26 19:00:43.225237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:11:16.833 BaseBdev2 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.833 [ 00:11:16.833 { 00:11:16.833 "name": "BaseBdev2", 00:11:16.833 "aliases": [ 00:11:16.833 "3af69e1d-f59e-453b-bd3a-bd4be2833232" 00:11:16.833 ], 00:11:16.833 "product_name": "Malloc disk", 00:11:16.833 "block_size": 512, 00:11:16.833 "num_blocks": 65536, 00:11:16.833 "uuid": "3af69e1d-f59e-453b-bd3a-bd4be2833232", 00:11:16.833 
"assigned_rate_limits": { 00:11:16.833 "rw_ios_per_sec": 0, 00:11:16.833 "rw_mbytes_per_sec": 0, 00:11:16.833 "r_mbytes_per_sec": 0, 00:11:16.833 "w_mbytes_per_sec": 0 00:11:16.833 }, 00:11:16.833 "claimed": true, 00:11:16.833 "claim_type": "exclusive_write", 00:11:16.833 "zoned": false, 00:11:16.833 "supported_io_types": { 00:11:16.833 "read": true, 00:11:16.833 "write": true, 00:11:16.833 "unmap": true, 00:11:16.833 "flush": true, 00:11:16.833 "reset": true, 00:11:16.833 "nvme_admin": false, 00:11:16.833 "nvme_io": false, 00:11:16.833 "nvme_io_md": false, 00:11:16.833 "write_zeroes": true, 00:11:16.833 "zcopy": true, 00:11:16.833 "get_zone_info": false, 00:11:16.833 "zone_management": false, 00:11:16.833 "zone_append": false, 00:11:16.833 "compare": false, 00:11:16.833 "compare_and_write": false, 00:11:16.833 "abort": true, 00:11:16.833 "seek_hole": false, 00:11:16.833 "seek_data": false, 00:11:16.833 "copy": true, 00:11:16.833 "nvme_iov_md": false 00:11:16.833 }, 00:11:16.833 "memory_domains": [ 00:11:16.833 { 00:11:16.833 "dma_device_id": "system", 00:11:16.833 "dma_device_type": 1 00:11:16.833 }, 00:11:16.833 { 00:11:16.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.833 "dma_device_type": 2 00:11:16.833 } 00:11:16.833 ], 00:11:16.833 "driver_specific": {} 00:11:16.833 } 00:11:16.833 ] 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.833 "name": "Existed_Raid", 00:11:16.833 "uuid": "94706882-309d-4833-af18-421715905d4f", 00:11:16.833 "strip_size_kb": 64, 00:11:16.833 "state": "configuring", 00:11:16.833 "raid_level": "raid0", 00:11:16.833 "superblock": true, 00:11:16.833 "num_base_bdevs": 4, 00:11:16.833 "num_base_bdevs_discovered": 2, 00:11:16.833 "num_base_bdevs_operational": 4, 
00:11:16.833 "base_bdevs_list": [ 00:11:16.833 { 00:11:16.833 "name": "BaseBdev1", 00:11:16.833 "uuid": "bb079add-3ba1-44f0-b768-6fd5a8b5af87", 00:11:16.833 "is_configured": true, 00:11:16.833 "data_offset": 2048, 00:11:16.833 "data_size": 63488 00:11:16.833 }, 00:11:16.833 { 00:11:16.833 "name": "BaseBdev2", 00:11:16.833 "uuid": "3af69e1d-f59e-453b-bd3a-bd4be2833232", 00:11:16.833 "is_configured": true, 00:11:16.833 "data_offset": 2048, 00:11:16.833 "data_size": 63488 00:11:16.833 }, 00:11:16.833 { 00:11:16.833 "name": "BaseBdev3", 00:11:16.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.833 "is_configured": false, 00:11:16.833 "data_offset": 0, 00:11:16.833 "data_size": 0 00:11:16.833 }, 00:11:16.833 { 00:11:16.833 "name": "BaseBdev4", 00:11:16.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.833 "is_configured": false, 00:11:16.833 "data_offset": 0, 00:11:16.833 "data_size": 0 00:11:16.833 } 00:11:16.833 ] 00:11:16.833 }' 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.833 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.399 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.399 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.399 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.399 [2024-11-26 19:00:43.856675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.399 BaseBdev3 00:11:17.399 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.399 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:17.399 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:17.399 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.399 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.400 [ 00:11:17.400 { 00:11:17.400 "name": "BaseBdev3", 00:11:17.400 "aliases": [ 00:11:17.400 "b70f12e3-f7a5-4e4f-82bb-12d6575351b9" 00:11:17.400 ], 00:11:17.400 "product_name": "Malloc disk", 00:11:17.400 "block_size": 512, 00:11:17.400 "num_blocks": 65536, 00:11:17.400 "uuid": "b70f12e3-f7a5-4e4f-82bb-12d6575351b9", 00:11:17.400 "assigned_rate_limits": { 00:11:17.400 "rw_ios_per_sec": 0, 00:11:17.400 "rw_mbytes_per_sec": 0, 00:11:17.400 "r_mbytes_per_sec": 0, 00:11:17.400 "w_mbytes_per_sec": 0 00:11:17.400 }, 00:11:17.400 "claimed": true, 00:11:17.400 "claim_type": "exclusive_write", 00:11:17.400 "zoned": false, 00:11:17.400 "supported_io_types": { 00:11:17.400 "read": true, 00:11:17.400 
"write": true, 00:11:17.400 "unmap": true, 00:11:17.400 "flush": true, 00:11:17.400 "reset": true, 00:11:17.400 "nvme_admin": false, 00:11:17.400 "nvme_io": false, 00:11:17.400 "nvme_io_md": false, 00:11:17.400 "write_zeroes": true, 00:11:17.400 "zcopy": true, 00:11:17.400 "get_zone_info": false, 00:11:17.400 "zone_management": false, 00:11:17.400 "zone_append": false, 00:11:17.400 "compare": false, 00:11:17.400 "compare_and_write": false, 00:11:17.400 "abort": true, 00:11:17.400 "seek_hole": false, 00:11:17.400 "seek_data": false, 00:11:17.400 "copy": true, 00:11:17.400 "nvme_iov_md": false 00:11:17.400 }, 00:11:17.400 "memory_domains": [ 00:11:17.400 { 00:11:17.400 "dma_device_id": "system", 00:11:17.400 "dma_device_type": 1 00:11:17.400 }, 00:11:17.400 { 00:11:17.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.400 "dma_device_type": 2 00:11:17.400 } 00:11:17.400 ], 00:11:17.400 "driver_specific": {} 00:11:17.400 } 00:11:17.400 ] 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.400 "name": "Existed_Raid", 00:11:17.400 "uuid": "94706882-309d-4833-af18-421715905d4f", 00:11:17.400 "strip_size_kb": 64, 00:11:17.400 "state": "configuring", 00:11:17.400 "raid_level": "raid0", 00:11:17.400 "superblock": true, 00:11:17.400 "num_base_bdevs": 4, 00:11:17.400 "num_base_bdevs_discovered": 3, 00:11:17.400 "num_base_bdevs_operational": 4, 00:11:17.400 "base_bdevs_list": [ 00:11:17.400 { 00:11:17.400 "name": "BaseBdev1", 00:11:17.400 "uuid": "bb079add-3ba1-44f0-b768-6fd5a8b5af87", 00:11:17.400 "is_configured": true, 00:11:17.400 "data_offset": 2048, 00:11:17.400 "data_size": 63488 00:11:17.400 }, 00:11:17.400 { 00:11:17.400 "name": "BaseBdev2", 00:11:17.400 "uuid": 
"3af69e1d-f59e-453b-bd3a-bd4be2833232", 00:11:17.400 "is_configured": true, 00:11:17.400 "data_offset": 2048, 00:11:17.400 "data_size": 63488 00:11:17.400 }, 00:11:17.400 { 00:11:17.400 "name": "BaseBdev3", 00:11:17.400 "uuid": "b70f12e3-f7a5-4e4f-82bb-12d6575351b9", 00:11:17.400 "is_configured": true, 00:11:17.400 "data_offset": 2048, 00:11:17.400 "data_size": 63488 00:11:17.400 }, 00:11:17.400 { 00:11:17.400 "name": "BaseBdev4", 00:11:17.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.400 "is_configured": false, 00:11:17.400 "data_offset": 0, 00:11:17.400 "data_size": 0 00:11:17.400 } 00:11:17.400 ] 00:11:17.400 }' 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.400 19:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.967 [2024-11-26 19:00:44.427934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:17.967 [2024-11-26 19:00:44.428529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:17.967 [2024-11-26 19:00:44.428681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:17.967 BaseBdev4 00:11:17.967 [2024-11-26 19:00:44.429121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:17.967 [2024-11-26 19:00:44.429350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:17.967 [2024-11-26 19:00:44.429378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:17.967 [2024-11-26 19:00:44.429580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.967 [ 00:11:17.967 { 00:11:17.967 "name": "BaseBdev4", 00:11:17.967 "aliases": [ 00:11:17.967 "ff5cef98-86dd-4a26-ac1f-e44b1f02968d" 00:11:17.967 ], 00:11:17.967 "product_name": "Malloc disk", 00:11:17.967 "block_size": 512, 00:11:17.967 
"num_blocks": 65536, 00:11:17.967 "uuid": "ff5cef98-86dd-4a26-ac1f-e44b1f02968d", 00:11:17.967 "assigned_rate_limits": { 00:11:17.967 "rw_ios_per_sec": 0, 00:11:17.967 "rw_mbytes_per_sec": 0, 00:11:17.967 "r_mbytes_per_sec": 0, 00:11:17.967 "w_mbytes_per_sec": 0 00:11:17.967 }, 00:11:17.967 "claimed": true, 00:11:17.967 "claim_type": "exclusive_write", 00:11:17.967 "zoned": false, 00:11:17.967 "supported_io_types": { 00:11:17.967 "read": true, 00:11:17.967 "write": true, 00:11:17.967 "unmap": true, 00:11:17.967 "flush": true, 00:11:17.967 "reset": true, 00:11:17.967 "nvme_admin": false, 00:11:17.967 "nvme_io": false, 00:11:17.967 "nvme_io_md": false, 00:11:17.967 "write_zeroes": true, 00:11:17.967 "zcopy": true, 00:11:17.967 "get_zone_info": false, 00:11:17.967 "zone_management": false, 00:11:17.967 "zone_append": false, 00:11:17.967 "compare": false, 00:11:17.967 "compare_and_write": false, 00:11:17.967 "abort": true, 00:11:17.967 "seek_hole": false, 00:11:17.967 "seek_data": false, 00:11:17.967 "copy": true, 00:11:17.967 "nvme_iov_md": false 00:11:17.967 }, 00:11:17.967 "memory_domains": [ 00:11:17.967 { 00:11:17.967 "dma_device_id": "system", 00:11:17.967 "dma_device_type": 1 00:11:17.967 }, 00:11:17.967 { 00:11:17.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.967 "dma_device_type": 2 00:11:17.967 } 00:11:17.967 ], 00:11:17.967 "driver_specific": {} 00:11:17.967 } 00:11:17.967 ] 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
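The `verify_raid_bdev_state` calls traced here fetch the raid bdev via `rpc_cmd bdev_raid_get_bdevs all`, pick out `Existed_Raid` with a `jq select`, and compare fields such as `state` and `num_base_bdevs_discovered` against expectations. A minimal standalone sketch of that check, assuming `jq` is installed and using sample JSON mirroring the dump in this log (not the live RPC output):

```shell
# Sketch of the state check performed by verify_raid_bdev_state above.
# The JSON is a trimmed-down stand-in for the bdev_raid_get_bdevs output.
raid_bdev_info='{"name": "Existed_Raid", "state": "online",
                 "num_base_bdevs": 4, "num_base_bdevs_discovered": 4}'
state=$(echo "$raid_bdev_info" | jq -r '.state')
discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
# Fail loudly when the raid bdev is not in the expected state.
[ "$state" = "online" ] || { echo "unexpected state: $state"; exit 1; }
[ "$discovered" -eq 4 ] || { echo "missing base bdevs"; exit 1; }
echo "Existed_Raid is $state with $discovered/4 base bdevs"
```

In the real test the same filter runs against `rpc_cmd bdev_raid_get_bdevs all`, so the state transitions (`configuring` → `online` → `offline`) seen in this log are asserted the same way after each base bdev is added or removed.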
00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.967 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.968 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.968 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.968 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.968 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.968 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.968 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.968 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.968 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.968 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.968 "name": "Existed_Raid", 00:11:17.968 "uuid": "94706882-309d-4833-af18-421715905d4f", 00:11:17.968 "strip_size_kb": 64, 00:11:17.968 "state": "online", 00:11:17.968 "raid_level": "raid0", 00:11:17.968 "superblock": true, 00:11:17.968 "num_base_bdevs": 4, 
00:11:17.968 "num_base_bdevs_discovered": 4, 00:11:17.968 "num_base_bdevs_operational": 4, 00:11:17.968 "base_bdevs_list": [ 00:11:17.968 { 00:11:17.968 "name": "BaseBdev1", 00:11:17.968 "uuid": "bb079add-3ba1-44f0-b768-6fd5a8b5af87", 00:11:17.968 "is_configured": true, 00:11:17.968 "data_offset": 2048, 00:11:17.968 "data_size": 63488 00:11:17.968 }, 00:11:17.968 { 00:11:17.968 "name": "BaseBdev2", 00:11:17.968 "uuid": "3af69e1d-f59e-453b-bd3a-bd4be2833232", 00:11:17.968 "is_configured": true, 00:11:17.968 "data_offset": 2048, 00:11:17.968 "data_size": 63488 00:11:17.968 }, 00:11:17.968 { 00:11:17.968 "name": "BaseBdev3", 00:11:17.968 "uuid": "b70f12e3-f7a5-4e4f-82bb-12d6575351b9", 00:11:17.968 "is_configured": true, 00:11:17.968 "data_offset": 2048, 00:11:17.968 "data_size": 63488 00:11:17.968 }, 00:11:17.968 { 00:11:17.968 "name": "BaseBdev4", 00:11:17.968 "uuid": "ff5cef98-86dd-4a26-ac1f-e44b1f02968d", 00:11:17.968 "is_configured": true, 00:11:17.968 "data_offset": 2048, 00:11:17.968 "data_size": 63488 00:11:17.968 } 00:11:17.968 ] 00:11:17.968 }' 00:11:17.968 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.968 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.535 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.535 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.535 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.535 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.535 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.535 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.535 
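The `verify_raid_bdev_properties` trace that follows compares the geometry of the raid volume against each configured base bdev by joining four fields into one string with `jq`. A small sketch of that comparison, assuming `jq` is available; the sample JSON imitates the Malloc descriptors in this log, where `md_size`, `md_interleave`, and `dif_type` are absent and therefore render as empty strings:

```shell
# Geometry comparison as done by bdev_raid.sh lines 187-193 in the trace:
# absent fields become empty strings, producing values like '512   '.
geom_filter='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
raid_json='{"name": "Existed_Raid", "block_size": 512}'
base_json='{"name": "BaseBdev1", "block_size": 512}'
cmp_raid_bdev=$(echo "$raid_json" | jq -r "$geom_filter")
cmp_base_bdev=$(echo "$base_json" | jq -r "$geom_filter")
# The test fails if any base bdev's geometry differs from the raid volume's.
[ "$cmp_raid_bdev" = "$cmp_base_bdev" ] && echo "geometry matches"
```

This explains the `[[ 512 == \5\1\2\ \ \ ]]` comparisons in the log: the trailing escaped spaces are the empty metadata fields after `join(" ")`.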
19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.535 19:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:18.535 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.535 19:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.535 [2024-11-26 19:00:44.992790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.535 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.535 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.535 "name": "Existed_Raid", 00:11:18.535 "aliases": [ 00:11:18.535 "94706882-309d-4833-af18-421715905d4f" 00:11:18.535 ], 00:11:18.535 "product_name": "Raid Volume", 00:11:18.535 "block_size": 512, 00:11:18.535 "num_blocks": 253952, 00:11:18.535 "uuid": "94706882-309d-4833-af18-421715905d4f", 00:11:18.535 "assigned_rate_limits": { 00:11:18.535 "rw_ios_per_sec": 0, 00:11:18.535 "rw_mbytes_per_sec": 0, 00:11:18.535 "r_mbytes_per_sec": 0, 00:11:18.535 "w_mbytes_per_sec": 0 00:11:18.535 }, 00:11:18.535 "claimed": false, 00:11:18.535 "zoned": false, 00:11:18.535 "supported_io_types": { 00:11:18.535 "read": true, 00:11:18.535 "write": true, 00:11:18.535 "unmap": true, 00:11:18.535 "flush": true, 00:11:18.535 "reset": true, 00:11:18.535 "nvme_admin": false, 00:11:18.535 "nvme_io": false, 00:11:18.535 "nvme_io_md": false, 00:11:18.535 "write_zeroes": true, 00:11:18.535 "zcopy": false, 00:11:18.535 "get_zone_info": false, 00:11:18.535 "zone_management": false, 00:11:18.535 "zone_append": false, 00:11:18.535 "compare": false, 00:11:18.535 "compare_and_write": false, 00:11:18.535 "abort": false, 00:11:18.535 "seek_hole": false, 00:11:18.535 "seek_data": false, 00:11:18.535 "copy": false, 00:11:18.535 
"nvme_iov_md": false 00:11:18.535 }, 00:11:18.535 "memory_domains": [ 00:11:18.535 { 00:11:18.535 "dma_device_id": "system", 00:11:18.535 "dma_device_type": 1 00:11:18.535 }, 00:11:18.535 { 00:11:18.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.535 "dma_device_type": 2 00:11:18.535 }, 00:11:18.535 { 00:11:18.535 "dma_device_id": "system", 00:11:18.535 "dma_device_type": 1 00:11:18.535 }, 00:11:18.535 { 00:11:18.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.535 "dma_device_type": 2 00:11:18.535 }, 00:11:18.535 { 00:11:18.535 "dma_device_id": "system", 00:11:18.535 "dma_device_type": 1 00:11:18.535 }, 00:11:18.535 { 00:11:18.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.535 "dma_device_type": 2 00:11:18.535 }, 00:11:18.535 { 00:11:18.536 "dma_device_id": "system", 00:11:18.536 "dma_device_type": 1 00:11:18.536 }, 00:11:18.536 { 00:11:18.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.536 "dma_device_type": 2 00:11:18.536 } 00:11:18.536 ], 00:11:18.536 "driver_specific": { 00:11:18.536 "raid": { 00:11:18.536 "uuid": "94706882-309d-4833-af18-421715905d4f", 00:11:18.536 "strip_size_kb": 64, 00:11:18.536 "state": "online", 00:11:18.536 "raid_level": "raid0", 00:11:18.536 "superblock": true, 00:11:18.536 "num_base_bdevs": 4, 00:11:18.536 "num_base_bdevs_discovered": 4, 00:11:18.536 "num_base_bdevs_operational": 4, 00:11:18.536 "base_bdevs_list": [ 00:11:18.536 { 00:11:18.536 "name": "BaseBdev1", 00:11:18.536 "uuid": "bb079add-3ba1-44f0-b768-6fd5a8b5af87", 00:11:18.536 "is_configured": true, 00:11:18.536 "data_offset": 2048, 00:11:18.536 "data_size": 63488 00:11:18.536 }, 00:11:18.536 { 00:11:18.536 "name": "BaseBdev2", 00:11:18.536 "uuid": "3af69e1d-f59e-453b-bd3a-bd4be2833232", 00:11:18.536 "is_configured": true, 00:11:18.536 "data_offset": 2048, 00:11:18.536 "data_size": 63488 00:11:18.536 }, 00:11:18.536 { 00:11:18.536 "name": "BaseBdev3", 00:11:18.536 "uuid": "b70f12e3-f7a5-4e4f-82bb-12d6575351b9", 00:11:18.536 "is_configured": true, 
00:11:18.536 "data_offset": 2048, 00:11:18.536 "data_size": 63488 00:11:18.536 }, 00:11:18.536 { 00:11:18.536 "name": "BaseBdev4", 00:11:18.536 "uuid": "ff5cef98-86dd-4a26-ac1f-e44b1f02968d", 00:11:18.536 "is_configured": true, 00:11:18.536 "data_offset": 2048, 00:11:18.536 "data_size": 63488 00:11:18.536 } 00:11:18.536 ] 00:11:18.536 } 00:11:18.536 } 00:11:18.536 }' 00:11:18.536 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.536 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:18.536 BaseBdev2 00:11:18.536 BaseBdev3 00:11:18.536 BaseBdev4' 00:11:18.536 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.536 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.536 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.536 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.536 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:18.536 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.536 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.795 19:00:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.795 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.795 [2024-11-26 19:00:45.364399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:18.795 [2024-11-26 19:00:45.364451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.795 [2024-11-26 19:00:45.364529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
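After `bdev_malloc_delete BaseBdev1`, the trace calls `has_redundancy raid0`, which returns 1, so the expected state flips to `offline`. A pure-bash sketch of that decision; the exact set of redundant levels is an assumption here (the log only shows that `raid0` falls through to `return 1`):

```shell
# Sketch of the has_redundancy branch traced above: levels with redundancy
# are expected to survive a base bdev removal and stay online; raid0 is not.
has_redundancy() {
    case "$1" in
        raid1|raid5f) return 0 ;;  # assumed redundant levels, not shown in this log
        *) return 1 ;;
    esac
}
if has_redundancy raid0; then
    expected_state=online
else
    expected_state=offline         # matches expected_state=offline in the trace
fi
echo "raid0 after base bdev removal: $expected_state"
```

The subsequent `verify_raid_bdev_state Existed_Raid offline raid0 64 3` call in the log then confirms the array really went offline with three remaining base bdevs.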
00:11:19.053 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.054 "name": "Existed_Raid", 00:11:19.054 "uuid": "94706882-309d-4833-af18-421715905d4f", 00:11:19.054 "strip_size_kb": 64, 00:11:19.054 "state": "offline", 00:11:19.054 "raid_level": "raid0", 00:11:19.054 "superblock": true, 00:11:19.054 "num_base_bdevs": 4, 00:11:19.054 "num_base_bdevs_discovered": 3, 00:11:19.054 "num_base_bdevs_operational": 3, 00:11:19.054 "base_bdevs_list": [ 00:11:19.054 { 00:11:19.054 "name": null, 00:11:19.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.054 "is_configured": false, 00:11:19.054 "data_offset": 0, 00:11:19.054 "data_size": 63488 00:11:19.054 }, 00:11:19.054 { 00:11:19.054 "name": "BaseBdev2", 00:11:19.054 "uuid": "3af69e1d-f59e-453b-bd3a-bd4be2833232", 00:11:19.054 "is_configured": true, 00:11:19.054 "data_offset": 2048, 00:11:19.054 "data_size": 63488 00:11:19.054 }, 00:11:19.054 { 00:11:19.054 "name": "BaseBdev3", 00:11:19.054 "uuid": "b70f12e3-f7a5-4e4f-82bb-12d6575351b9", 00:11:19.054 "is_configured": true, 00:11:19.054 "data_offset": 2048, 00:11:19.054 "data_size": 63488 00:11:19.054 }, 00:11:19.054 { 00:11:19.054 "name": "BaseBdev4", 00:11:19.054 "uuid": "ff5cef98-86dd-4a26-ac1f-e44b1f02968d", 00:11:19.054 "is_configured": true, 00:11:19.054 "data_offset": 2048, 00:11:19.054 "data_size": 63488 00:11:19.054 } 00:11:19.054 ] 00:11:19.054 }' 00:11:19.054 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.054 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.621 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:19.621 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.621 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.621 
19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.621 19:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.621 19:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.621 [2024-11-26 19:00:46.045779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.621 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.621 [2024-11-26 19:00:46.208709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:19.880 19:00:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.880 [2024-11-26 19:00:46.370550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:19.880 [2024-11-26 19:00:46.370638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:19.880 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.139 BaseBdev2 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.139 [ 00:11:20.139 { 00:11:20.139 "name": "BaseBdev2", 00:11:20.139 "aliases": [ 00:11:20.139 
"ff043e64-d530-4e89-ae36-6bddd9d81698" 00:11:20.139 ], 00:11:20.139 "product_name": "Malloc disk", 00:11:20.139 "block_size": 512, 00:11:20.139 "num_blocks": 65536, 00:11:20.139 "uuid": "ff043e64-d530-4e89-ae36-6bddd9d81698", 00:11:20.139 "assigned_rate_limits": { 00:11:20.139 "rw_ios_per_sec": 0, 00:11:20.139 "rw_mbytes_per_sec": 0, 00:11:20.139 "r_mbytes_per_sec": 0, 00:11:20.139 "w_mbytes_per_sec": 0 00:11:20.139 }, 00:11:20.139 "claimed": false, 00:11:20.139 "zoned": false, 00:11:20.139 "supported_io_types": { 00:11:20.139 "read": true, 00:11:20.139 "write": true, 00:11:20.139 "unmap": true, 00:11:20.139 "flush": true, 00:11:20.139 "reset": true, 00:11:20.139 "nvme_admin": false, 00:11:20.139 "nvme_io": false, 00:11:20.139 "nvme_io_md": false, 00:11:20.139 "write_zeroes": true, 00:11:20.139 "zcopy": true, 00:11:20.139 "get_zone_info": false, 00:11:20.139 "zone_management": false, 00:11:20.139 "zone_append": false, 00:11:20.139 "compare": false, 00:11:20.139 "compare_and_write": false, 00:11:20.139 "abort": true, 00:11:20.139 "seek_hole": false, 00:11:20.139 "seek_data": false, 00:11:20.139 "copy": true, 00:11:20.139 "nvme_iov_md": false 00:11:20.139 }, 00:11:20.139 "memory_domains": [ 00:11:20.139 { 00:11:20.139 "dma_device_id": "system", 00:11:20.139 "dma_device_type": 1 00:11:20.139 }, 00:11:20.139 { 00:11:20.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.139 "dma_device_type": 2 00:11:20.139 } 00:11:20.139 ], 00:11:20.139 "driver_specific": {} 00:11:20.139 } 00:11:20.139 ] 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.139 19:00:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.139 BaseBdev3 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.139 [ 00:11:20.139 { 
00:11:20.139 "name": "BaseBdev3", 00:11:20.139 "aliases": [ 00:11:20.139 "80867cd5-a1a5-4338-981d-e4f98b5092e2" 00:11:20.139 ], 00:11:20.139 "product_name": "Malloc disk", 00:11:20.139 "block_size": 512, 00:11:20.139 "num_blocks": 65536, 00:11:20.139 "uuid": "80867cd5-a1a5-4338-981d-e4f98b5092e2", 00:11:20.139 "assigned_rate_limits": { 00:11:20.139 "rw_ios_per_sec": 0, 00:11:20.139 "rw_mbytes_per_sec": 0, 00:11:20.139 "r_mbytes_per_sec": 0, 00:11:20.139 "w_mbytes_per_sec": 0 00:11:20.139 }, 00:11:20.139 "claimed": false, 00:11:20.139 "zoned": false, 00:11:20.139 "supported_io_types": { 00:11:20.139 "read": true, 00:11:20.139 "write": true, 00:11:20.139 "unmap": true, 00:11:20.139 "flush": true, 00:11:20.139 "reset": true, 00:11:20.139 "nvme_admin": false, 00:11:20.139 "nvme_io": false, 00:11:20.139 "nvme_io_md": false, 00:11:20.139 "write_zeroes": true, 00:11:20.139 "zcopy": true, 00:11:20.139 "get_zone_info": false, 00:11:20.139 "zone_management": false, 00:11:20.139 "zone_append": false, 00:11:20.139 "compare": false, 00:11:20.139 "compare_and_write": false, 00:11:20.139 "abort": true, 00:11:20.139 "seek_hole": false, 00:11:20.139 "seek_data": false, 00:11:20.139 "copy": true, 00:11:20.139 "nvme_iov_md": false 00:11:20.139 }, 00:11:20.139 "memory_domains": [ 00:11:20.139 { 00:11:20.139 "dma_device_id": "system", 00:11:20.139 "dma_device_type": 1 00:11:20.139 }, 00:11:20.139 { 00:11:20.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.139 "dma_device_type": 2 00:11:20.139 } 00:11:20.139 ], 00:11:20.139 "driver_specific": {} 00:11:20.139 } 00:11:20.139 ] 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.139 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.140 BaseBdev4 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.140 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.398 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.398 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:20.398 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.398 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:20.398 [ 00:11:20.398 { 00:11:20.398 "name": "BaseBdev4", 00:11:20.398 "aliases": [ 00:11:20.398 "ffec91c0-b86a-44c9-b09c-bae15e0756c4" 00:11:20.398 ], 00:11:20.398 "product_name": "Malloc disk", 00:11:20.398 "block_size": 512, 00:11:20.398 "num_blocks": 65536, 00:11:20.398 "uuid": "ffec91c0-b86a-44c9-b09c-bae15e0756c4", 00:11:20.398 "assigned_rate_limits": { 00:11:20.398 "rw_ios_per_sec": 0, 00:11:20.398 "rw_mbytes_per_sec": 0, 00:11:20.398 "r_mbytes_per_sec": 0, 00:11:20.398 "w_mbytes_per_sec": 0 00:11:20.398 }, 00:11:20.398 "claimed": false, 00:11:20.398 "zoned": false, 00:11:20.398 "supported_io_types": { 00:11:20.398 "read": true, 00:11:20.398 "write": true, 00:11:20.399 "unmap": true, 00:11:20.399 "flush": true, 00:11:20.399 "reset": true, 00:11:20.399 "nvme_admin": false, 00:11:20.399 "nvme_io": false, 00:11:20.399 "nvme_io_md": false, 00:11:20.399 "write_zeroes": true, 00:11:20.399 "zcopy": true, 00:11:20.399 "get_zone_info": false, 00:11:20.399 "zone_management": false, 00:11:20.399 "zone_append": false, 00:11:20.399 "compare": false, 00:11:20.399 "compare_and_write": false, 00:11:20.399 "abort": true, 00:11:20.399 "seek_hole": false, 00:11:20.399 "seek_data": false, 00:11:20.399 "copy": true, 00:11:20.399 "nvme_iov_md": false 00:11:20.399 }, 00:11:20.399 "memory_domains": [ 00:11:20.399 { 00:11:20.399 "dma_device_id": "system", 00:11:20.399 "dma_device_type": 1 00:11:20.399 }, 00:11:20.399 { 00:11:20.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.399 "dma_device_type": 2 00:11:20.399 } 00:11:20.399 ], 00:11:20.399 "driver_specific": {} 00:11:20.399 } 00:11:20.399 ] 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.399 19:00:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.399 [2024-11-26 19:00:46.790532] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.399 [2024-11-26 19:00:46.790788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.399 [2024-11-26 19:00:46.790952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.399 [2024-11-26 19:00:46.793911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.399 [2024-11-26 19:00:46.794132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.399 "name": "Existed_Raid", 00:11:20.399 "uuid": "eab0c4b8-509d-49d3-a02d-f183011de767", 00:11:20.399 "strip_size_kb": 64, 00:11:20.399 "state": "configuring", 00:11:20.399 "raid_level": "raid0", 00:11:20.399 "superblock": true, 00:11:20.399 "num_base_bdevs": 4, 00:11:20.399 "num_base_bdevs_discovered": 3, 00:11:20.399 "num_base_bdevs_operational": 4, 00:11:20.399 "base_bdevs_list": [ 00:11:20.399 { 00:11:20.399 "name": "BaseBdev1", 00:11:20.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.399 "is_configured": false, 00:11:20.399 "data_offset": 0, 00:11:20.399 "data_size": 0 00:11:20.399 }, 00:11:20.399 { 00:11:20.399 "name": "BaseBdev2", 00:11:20.399 "uuid": "ff043e64-d530-4e89-ae36-6bddd9d81698", 00:11:20.399 "is_configured": true, 00:11:20.399 "data_offset": 2048, 00:11:20.399 "data_size": 63488 
00:11:20.399 }, 00:11:20.399 { 00:11:20.399 "name": "BaseBdev3", 00:11:20.399 "uuid": "80867cd5-a1a5-4338-981d-e4f98b5092e2", 00:11:20.399 "is_configured": true, 00:11:20.399 "data_offset": 2048, 00:11:20.399 "data_size": 63488 00:11:20.399 }, 00:11:20.399 { 00:11:20.399 "name": "BaseBdev4", 00:11:20.399 "uuid": "ffec91c0-b86a-44c9-b09c-bae15e0756c4", 00:11:20.399 "is_configured": true, 00:11:20.399 "data_offset": 2048, 00:11:20.399 "data_size": 63488 00:11:20.399 } 00:11:20.399 ] 00:11:20.399 }' 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.399 19:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.966 [2024-11-26 19:00:47.354796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.966 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.966 "name": "Existed_Raid", 00:11:20.966 "uuid": "eab0c4b8-509d-49d3-a02d-f183011de767", 00:11:20.966 "strip_size_kb": 64, 00:11:20.966 "state": "configuring", 00:11:20.966 "raid_level": "raid0", 00:11:20.966 "superblock": true, 00:11:20.966 "num_base_bdevs": 4, 00:11:20.966 "num_base_bdevs_discovered": 2, 00:11:20.966 "num_base_bdevs_operational": 4, 00:11:20.966 "base_bdevs_list": [ 00:11:20.966 { 00:11:20.966 "name": "BaseBdev1", 00:11:20.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.966 "is_configured": false, 00:11:20.967 "data_offset": 0, 00:11:20.967 "data_size": 0 00:11:20.967 }, 00:11:20.967 { 00:11:20.967 "name": null, 00:11:20.967 "uuid": "ff043e64-d530-4e89-ae36-6bddd9d81698", 00:11:20.967 "is_configured": false, 00:11:20.967 "data_offset": 0, 00:11:20.967 "data_size": 63488 
00:11:20.967 }, 00:11:20.967 { 00:11:20.967 "name": "BaseBdev3", 00:11:20.967 "uuid": "80867cd5-a1a5-4338-981d-e4f98b5092e2", 00:11:20.967 "is_configured": true, 00:11:20.967 "data_offset": 2048, 00:11:20.967 "data_size": 63488 00:11:20.967 }, 00:11:20.967 { 00:11:20.967 "name": "BaseBdev4", 00:11:20.967 "uuid": "ffec91c0-b86a-44c9-b09c-bae15e0756c4", 00:11:20.967 "is_configured": true, 00:11:20.967 "data_offset": 2048, 00:11:20.967 "data_size": 63488 00:11:20.967 } 00:11:20.967 ] 00:11:20.967 }' 00:11:20.967 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.967 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.534 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.534 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.534 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:21.534 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.534 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.534 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:21.534 19:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:21.534 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.534 19:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.534 [2024-11-26 19:00:48.040819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.534 BaseBdev1 00:11:21.534 19:00:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.535 [ 00:11:21.535 { 00:11:21.535 "name": "BaseBdev1", 00:11:21.535 "aliases": [ 00:11:21.535 "c791a3f1-883d-4e88-9be4-568c62c9815c" 00:11:21.535 ], 00:11:21.535 "product_name": "Malloc disk", 00:11:21.535 "block_size": 512, 00:11:21.535 "num_blocks": 65536, 00:11:21.535 "uuid": "c791a3f1-883d-4e88-9be4-568c62c9815c", 00:11:21.535 "assigned_rate_limits": { 00:11:21.535 "rw_ios_per_sec": 0, 00:11:21.535 "rw_mbytes_per_sec": 0, 
00:11:21.535 "r_mbytes_per_sec": 0, 00:11:21.535 "w_mbytes_per_sec": 0 00:11:21.535 }, 00:11:21.535 "claimed": true, 00:11:21.535 "claim_type": "exclusive_write", 00:11:21.535 "zoned": false, 00:11:21.535 "supported_io_types": { 00:11:21.535 "read": true, 00:11:21.535 "write": true, 00:11:21.535 "unmap": true, 00:11:21.535 "flush": true, 00:11:21.535 "reset": true, 00:11:21.535 "nvme_admin": false, 00:11:21.535 "nvme_io": false, 00:11:21.535 "nvme_io_md": false, 00:11:21.535 "write_zeroes": true, 00:11:21.535 "zcopy": true, 00:11:21.535 "get_zone_info": false, 00:11:21.535 "zone_management": false, 00:11:21.535 "zone_append": false, 00:11:21.535 "compare": false, 00:11:21.535 "compare_and_write": false, 00:11:21.535 "abort": true, 00:11:21.535 "seek_hole": false, 00:11:21.535 "seek_data": false, 00:11:21.535 "copy": true, 00:11:21.535 "nvme_iov_md": false 00:11:21.535 }, 00:11:21.535 "memory_domains": [ 00:11:21.535 { 00:11:21.535 "dma_device_id": "system", 00:11:21.535 "dma_device_type": 1 00:11:21.535 }, 00:11:21.535 { 00:11:21.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.535 "dma_device_type": 2 00:11:21.535 } 00:11:21.535 ], 00:11:21.535 "driver_specific": {} 00:11:21.535 } 00:11:21.535 ] 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.535 19:00:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.535 "name": "Existed_Raid", 00:11:21.535 "uuid": "eab0c4b8-509d-49d3-a02d-f183011de767", 00:11:21.535 "strip_size_kb": 64, 00:11:21.535 "state": "configuring", 00:11:21.535 "raid_level": "raid0", 00:11:21.535 "superblock": true, 00:11:21.535 "num_base_bdevs": 4, 00:11:21.535 "num_base_bdevs_discovered": 3, 00:11:21.535 "num_base_bdevs_operational": 4, 00:11:21.535 "base_bdevs_list": [ 00:11:21.535 { 00:11:21.535 "name": "BaseBdev1", 00:11:21.535 "uuid": "c791a3f1-883d-4e88-9be4-568c62c9815c", 00:11:21.535 "is_configured": true, 00:11:21.535 "data_offset": 2048, 00:11:21.535 "data_size": 63488 00:11:21.535 }, 00:11:21.535 { 
00:11:21.535 "name": null, 00:11:21.535 "uuid": "ff043e64-d530-4e89-ae36-6bddd9d81698", 00:11:21.535 "is_configured": false, 00:11:21.535 "data_offset": 0, 00:11:21.535 "data_size": 63488 00:11:21.535 }, 00:11:21.535 { 00:11:21.535 "name": "BaseBdev3", 00:11:21.535 "uuid": "80867cd5-a1a5-4338-981d-e4f98b5092e2", 00:11:21.535 "is_configured": true, 00:11:21.535 "data_offset": 2048, 00:11:21.535 "data_size": 63488 00:11:21.535 }, 00:11:21.535 { 00:11:21.535 "name": "BaseBdev4", 00:11:21.535 "uuid": "ffec91c0-b86a-44c9-b09c-bae15e0756c4", 00:11:21.535 "is_configured": true, 00:11:21.535 "data_offset": 2048, 00:11:21.535 "data_size": 63488 00:11:21.535 } 00:11:21.535 ] 00:11:21.535 }' 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.535 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.102 [2024-11-26 19:00:48.645113] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.102 19:00:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.102 "name": "Existed_Raid", 00:11:22.102 "uuid": "eab0c4b8-509d-49d3-a02d-f183011de767", 00:11:22.102 "strip_size_kb": 64, 00:11:22.102 "state": "configuring", 00:11:22.102 "raid_level": "raid0", 00:11:22.102 "superblock": true, 00:11:22.102 "num_base_bdevs": 4, 00:11:22.102 "num_base_bdevs_discovered": 2, 00:11:22.102 "num_base_bdevs_operational": 4, 00:11:22.102 "base_bdevs_list": [ 00:11:22.102 { 00:11:22.102 "name": "BaseBdev1", 00:11:22.102 "uuid": "c791a3f1-883d-4e88-9be4-568c62c9815c", 00:11:22.102 "is_configured": true, 00:11:22.102 "data_offset": 2048, 00:11:22.102 "data_size": 63488 00:11:22.102 }, 00:11:22.102 { 00:11:22.102 "name": null, 00:11:22.102 "uuid": "ff043e64-d530-4e89-ae36-6bddd9d81698", 00:11:22.102 "is_configured": false, 00:11:22.102 "data_offset": 0, 00:11:22.102 "data_size": 63488 00:11:22.102 }, 00:11:22.102 { 00:11:22.102 "name": null, 00:11:22.102 "uuid": "80867cd5-a1a5-4338-981d-e4f98b5092e2", 00:11:22.102 "is_configured": false, 00:11:22.102 "data_offset": 0, 00:11:22.102 "data_size": 63488 00:11:22.102 }, 00:11:22.102 { 00:11:22.102 "name": "BaseBdev4", 00:11:22.102 "uuid": "ffec91c0-b86a-44c9-b09c-bae15e0756c4", 00:11:22.102 "is_configured": true, 00:11:22.102 "data_offset": 2048, 00:11:22.102 "data_size": 63488 00:11:22.102 } 00:11:22.102 ] 00:11:22.102 }' 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.102 19:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.669 
19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.669 [2024-11-26 19:00:49.253324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.669 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.931 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.931 "name": "Existed_Raid", 00:11:22.931 "uuid": "eab0c4b8-509d-49d3-a02d-f183011de767", 00:11:22.931 "strip_size_kb": 64, 00:11:22.931 "state": "configuring", 00:11:22.931 "raid_level": "raid0", 00:11:22.931 "superblock": true, 00:11:22.931 "num_base_bdevs": 4, 00:11:22.931 "num_base_bdevs_discovered": 3, 00:11:22.931 "num_base_bdevs_operational": 4, 00:11:22.931 "base_bdevs_list": [ 00:11:22.931 { 00:11:22.931 "name": "BaseBdev1", 00:11:22.931 "uuid": "c791a3f1-883d-4e88-9be4-568c62c9815c", 00:11:22.931 "is_configured": true, 00:11:22.931 "data_offset": 2048, 00:11:22.931 "data_size": 63488 00:11:22.931 }, 00:11:22.931 { 00:11:22.931 "name": null, 00:11:22.931 "uuid": "ff043e64-d530-4e89-ae36-6bddd9d81698", 00:11:22.931 "is_configured": false, 00:11:22.931 "data_offset": 0, 00:11:22.931 "data_size": 63488 00:11:22.931 }, 00:11:22.931 { 00:11:22.931 "name": "BaseBdev3", 00:11:22.931 "uuid": "80867cd5-a1a5-4338-981d-e4f98b5092e2", 00:11:22.931 "is_configured": true, 00:11:22.931 "data_offset": 2048, 00:11:22.931 "data_size": 63488 00:11:22.931 }, 00:11:22.931 { 00:11:22.931 "name": "BaseBdev4", 00:11:22.931 "uuid": 
"ffec91c0-b86a-44c9-b09c-bae15e0756c4", 00:11:22.931 "is_configured": true, 00:11:22.931 "data_offset": 2048, 00:11:22.931 "data_size": 63488 00:11:22.931 } 00:11:22.931 ] 00:11:22.931 }' 00:11:22.931 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.931 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.197 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:23.197 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.197 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.197 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.456 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.457 [2024-11-26 19:00:49.857529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.457 19:00:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.457 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.457 "name": "Existed_Raid", 00:11:23.457 "uuid": "eab0c4b8-509d-49d3-a02d-f183011de767", 00:11:23.457 "strip_size_kb": 64, 00:11:23.457 "state": "configuring", 00:11:23.457 "raid_level": "raid0", 00:11:23.457 "superblock": true, 00:11:23.457 "num_base_bdevs": 4, 00:11:23.457 "num_base_bdevs_discovered": 2, 00:11:23.457 "num_base_bdevs_operational": 4, 00:11:23.457 "base_bdevs_list": [ 00:11:23.457 { 00:11:23.457 "name": null, 00:11:23.457 
"uuid": "c791a3f1-883d-4e88-9be4-568c62c9815c", 00:11:23.457 "is_configured": false, 00:11:23.457 "data_offset": 0, 00:11:23.457 "data_size": 63488 00:11:23.457 }, 00:11:23.457 { 00:11:23.457 "name": null, 00:11:23.457 "uuid": "ff043e64-d530-4e89-ae36-6bddd9d81698", 00:11:23.457 "is_configured": false, 00:11:23.457 "data_offset": 0, 00:11:23.457 "data_size": 63488 00:11:23.457 }, 00:11:23.457 { 00:11:23.457 "name": "BaseBdev3", 00:11:23.457 "uuid": "80867cd5-a1a5-4338-981d-e4f98b5092e2", 00:11:23.457 "is_configured": true, 00:11:23.457 "data_offset": 2048, 00:11:23.457 "data_size": 63488 00:11:23.457 }, 00:11:23.457 { 00:11:23.457 "name": "BaseBdev4", 00:11:23.457 "uuid": "ffec91c0-b86a-44c9-b09c-bae15e0756c4", 00:11:23.457 "is_configured": true, 00:11:23.457 "data_offset": 2048, 00:11:23.457 "data_size": 63488 00:11:23.457 } 00:11:23.457 ] 00:11:23.457 }' 00:11:23.457 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.457 19:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.023 [2024-11-26 19:00:50.540812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.023 19:00:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.023 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.023 "name": "Existed_Raid", 00:11:24.023 "uuid": "eab0c4b8-509d-49d3-a02d-f183011de767", 00:11:24.023 "strip_size_kb": 64, 00:11:24.023 "state": "configuring", 00:11:24.023 "raid_level": "raid0", 00:11:24.023 "superblock": true, 00:11:24.023 "num_base_bdevs": 4, 00:11:24.023 "num_base_bdevs_discovered": 3, 00:11:24.023 "num_base_bdevs_operational": 4, 00:11:24.023 "base_bdevs_list": [ 00:11:24.023 { 00:11:24.023 "name": null, 00:11:24.023 "uuid": "c791a3f1-883d-4e88-9be4-568c62c9815c", 00:11:24.023 "is_configured": false, 00:11:24.023 "data_offset": 0, 00:11:24.023 "data_size": 63488 00:11:24.023 }, 00:11:24.023 { 00:11:24.023 "name": "BaseBdev2", 00:11:24.024 "uuid": "ff043e64-d530-4e89-ae36-6bddd9d81698", 00:11:24.024 "is_configured": true, 00:11:24.024 "data_offset": 2048, 00:11:24.024 "data_size": 63488 00:11:24.024 }, 00:11:24.024 { 00:11:24.024 "name": "BaseBdev3", 00:11:24.024 "uuid": "80867cd5-a1a5-4338-981d-e4f98b5092e2", 00:11:24.024 "is_configured": true, 00:11:24.024 "data_offset": 2048, 00:11:24.024 "data_size": 63488 00:11:24.024 }, 00:11:24.024 { 00:11:24.024 "name": "BaseBdev4", 00:11:24.024 "uuid": "ffec91c0-b86a-44c9-b09c-bae15e0756c4", 00:11:24.024 "is_configured": true, 00:11:24.024 "data_offset": 2048, 00:11:24.024 "data_size": 63488 00:11:24.024 } 00:11:24.024 ] 00:11:24.024 }' 00:11:24.024 19:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.024 19:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.591 19:00:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c791a3f1-883d-4e88-9be4-568c62c9815c 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.591 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.850 [2024-11-26 19:00:51.224683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:24.850 [2024-11-26 19:00:51.224995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:24.850 [2024-11-26 19:00:51.225015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:24.850 NewBaseBdev 00:11:24.850 [2024-11-26 19:00:51.225390] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:24.850 [2024-11-26 19:00:51.225582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:24.850 [2024-11-26 19:00:51.225700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:24.850 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.850 [2024-11-26 19:00:51.225885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.850 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:24.850 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:24.850 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.850 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:24.850 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.850 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.850 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.850 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.850 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.850 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.851 
19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.851 [ 00:11:24.851 { 00:11:24.851 "name": "NewBaseBdev", 00:11:24.851 "aliases": [ 00:11:24.851 "c791a3f1-883d-4e88-9be4-568c62c9815c" 00:11:24.851 ], 00:11:24.851 "product_name": "Malloc disk", 00:11:24.851 "block_size": 512, 00:11:24.851 "num_blocks": 65536, 00:11:24.851 "uuid": "c791a3f1-883d-4e88-9be4-568c62c9815c", 00:11:24.851 "assigned_rate_limits": { 00:11:24.851 "rw_ios_per_sec": 0, 00:11:24.851 "rw_mbytes_per_sec": 0, 00:11:24.851 "r_mbytes_per_sec": 0, 00:11:24.851 "w_mbytes_per_sec": 0 00:11:24.851 }, 00:11:24.851 "claimed": true, 00:11:24.851 "claim_type": "exclusive_write", 00:11:24.851 "zoned": false, 00:11:24.851 "supported_io_types": { 00:11:24.851 "read": true, 00:11:24.851 "write": true, 00:11:24.851 "unmap": true, 00:11:24.851 "flush": true, 00:11:24.851 "reset": true, 00:11:24.851 "nvme_admin": false, 00:11:24.851 "nvme_io": false, 00:11:24.851 "nvme_io_md": false, 00:11:24.851 "write_zeroes": true, 00:11:24.851 "zcopy": true, 00:11:24.851 "get_zone_info": false, 00:11:24.851 "zone_management": false, 00:11:24.851 "zone_append": false, 00:11:24.851 "compare": false, 00:11:24.851 "compare_and_write": false, 00:11:24.851 "abort": true, 00:11:24.851 "seek_hole": false, 00:11:24.851 "seek_data": false, 00:11:24.851 "copy": true, 00:11:24.851 "nvme_iov_md": false 00:11:24.851 }, 00:11:24.851 "memory_domains": [ 00:11:24.851 { 00:11:24.851 "dma_device_id": "system", 00:11:24.851 "dma_device_type": 1 00:11:24.851 }, 00:11:24.851 { 00:11:24.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.851 "dma_device_type": 2 00:11:24.851 } 00:11:24.851 ], 00:11:24.851 "driver_specific": {} 00:11:24.851 } 00:11:24.851 ] 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:24.851 19:00:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.851 "name": "Existed_Raid", 00:11:24.851 "uuid": "eab0c4b8-509d-49d3-a02d-f183011de767", 00:11:24.851 "strip_size_kb": 64, 00:11:24.851 
"state": "online", 00:11:24.851 "raid_level": "raid0", 00:11:24.851 "superblock": true, 00:11:24.851 "num_base_bdevs": 4, 00:11:24.851 "num_base_bdevs_discovered": 4, 00:11:24.851 "num_base_bdevs_operational": 4, 00:11:24.851 "base_bdevs_list": [ 00:11:24.851 { 00:11:24.851 "name": "NewBaseBdev", 00:11:24.851 "uuid": "c791a3f1-883d-4e88-9be4-568c62c9815c", 00:11:24.851 "is_configured": true, 00:11:24.851 "data_offset": 2048, 00:11:24.851 "data_size": 63488 00:11:24.851 }, 00:11:24.851 { 00:11:24.851 "name": "BaseBdev2", 00:11:24.851 "uuid": "ff043e64-d530-4e89-ae36-6bddd9d81698", 00:11:24.851 "is_configured": true, 00:11:24.851 "data_offset": 2048, 00:11:24.851 "data_size": 63488 00:11:24.851 }, 00:11:24.851 { 00:11:24.851 "name": "BaseBdev3", 00:11:24.851 "uuid": "80867cd5-a1a5-4338-981d-e4f98b5092e2", 00:11:24.851 "is_configured": true, 00:11:24.851 "data_offset": 2048, 00:11:24.851 "data_size": 63488 00:11:24.851 }, 00:11:24.851 { 00:11:24.851 "name": "BaseBdev4", 00:11:24.851 "uuid": "ffec91c0-b86a-44c9-b09c-bae15e0756c4", 00:11:24.851 "is_configured": true, 00:11:24.851 "data_offset": 2048, 00:11:24.851 "data_size": 63488 00:11:24.851 } 00:11:24.851 ] 00:11:24.851 }' 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.851 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.418 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:25.418 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:25.418 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:25.418 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:25.418 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:25.418 
19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:25.418 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:25.418 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:25.418 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.418 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.418 [2024-11-26 19:00:51.805393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.418 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.418 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:25.418 "name": "Existed_Raid", 00:11:25.418 "aliases": [ 00:11:25.418 "eab0c4b8-509d-49d3-a02d-f183011de767" 00:11:25.418 ], 00:11:25.418 "product_name": "Raid Volume", 00:11:25.418 "block_size": 512, 00:11:25.418 "num_blocks": 253952, 00:11:25.418 "uuid": "eab0c4b8-509d-49d3-a02d-f183011de767", 00:11:25.418 "assigned_rate_limits": { 00:11:25.418 "rw_ios_per_sec": 0, 00:11:25.418 "rw_mbytes_per_sec": 0, 00:11:25.418 "r_mbytes_per_sec": 0, 00:11:25.418 "w_mbytes_per_sec": 0 00:11:25.418 }, 00:11:25.418 "claimed": false, 00:11:25.418 "zoned": false, 00:11:25.418 "supported_io_types": { 00:11:25.418 "read": true, 00:11:25.418 "write": true, 00:11:25.418 "unmap": true, 00:11:25.418 "flush": true, 00:11:25.418 "reset": true, 00:11:25.418 "nvme_admin": false, 00:11:25.418 "nvme_io": false, 00:11:25.418 "nvme_io_md": false, 00:11:25.418 "write_zeroes": true, 00:11:25.418 "zcopy": false, 00:11:25.418 "get_zone_info": false, 00:11:25.418 "zone_management": false, 00:11:25.418 "zone_append": false, 00:11:25.418 "compare": false, 00:11:25.418 "compare_and_write": false, 00:11:25.418 "abort": 
false, 00:11:25.418 "seek_hole": false, 00:11:25.418 "seek_data": false, 00:11:25.418 "copy": false, 00:11:25.418 "nvme_iov_md": false 00:11:25.418 }, 00:11:25.418 "memory_domains": [ 00:11:25.418 { 00:11:25.418 "dma_device_id": "system", 00:11:25.418 "dma_device_type": 1 00:11:25.418 }, 00:11:25.418 { 00:11:25.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.418 "dma_device_type": 2 00:11:25.418 }, 00:11:25.418 { 00:11:25.418 "dma_device_id": "system", 00:11:25.418 "dma_device_type": 1 00:11:25.418 }, 00:11:25.418 { 00:11:25.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.418 "dma_device_type": 2 00:11:25.418 }, 00:11:25.418 { 00:11:25.418 "dma_device_id": "system", 00:11:25.418 "dma_device_type": 1 00:11:25.418 }, 00:11:25.418 { 00:11:25.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.419 "dma_device_type": 2 00:11:25.419 }, 00:11:25.419 { 00:11:25.419 "dma_device_id": "system", 00:11:25.419 "dma_device_type": 1 00:11:25.419 }, 00:11:25.419 { 00:11:25.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.419 "dma_device_type": 2 00:11:25.419 } 00:11:25.419 ], 00:11:25.419 "driver_specific": { 00:11:25.419 "raid": { 00:11:25.419 "uuid": "eab0c4b8-509d-49d3-a02d-f183011de767", 00:11:25.419 "strip_size_kb": 64, 00:11:25.419 "state": "online", 00:11:25.419 "raid_level": "raid0", 00:11:25.419 "superblock": true, 00:11:25.419 "num_base_bdevs": 4, 00:11:25.419 "num_base_bdevs_discovered": 4, 00:11:25.419 "num_base_bdevs_operational": 4, 00:11:25.419 "base_bdevs_list": [ 00:11:25.419 { 00:11:25.419 "name": "NewBaseBdev", 00:11:25.419 "uuid": "c791a3f1-883d-4e88-9be4-568c62c9815c", 00:11:25.419 "is_configured": true, 00:11:25.419 "data_offset": 2048, 00:11:25.419 "data_size": 63488 00:11:25.419 }, 00:11:25.419 { 00:11:25.419 "name": "BaseBdev2", 00:11:25.419 "uuid": "ff043e64-d530-4e89-ae36-6bddd9d81698", 00:11:25.419 "is_configured": true, 00:11:25.419 "data_offset": 2048, 00:11:25.419 "data_size": 63488 00:11:25.419 }, 00:11:25.419 { 00:11:25.419 
"name": "BaseBdev3", 00:11:25.419 "uuid": "80867cd5-a1a5-4338-981d-e4f98b5092e2", 00:11:25.419 "is_configured": true, 00:11:25.419 "data_offset": 2048, 00:11:25.419 "data_size": 63488 00:11:25.419 }, 00:11:25.419 { 00:11:25.419 "name": "BaseBdev4", 00:11:25.419 "uuid": "ffec91c0-b86a-44c9-b09c-bae15e0756c4", 00:11:25.419 "is_configured": true, 00:11:25.419 "data_offset": 2048, 00:11:25.419 "data_size": 63488 00:11:25.419 } 00:11:25.419 ] 00:11:25.419 } 00:11:25.419 } 00:11:25.419 }' 00:11:25.419 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.419 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:25.419 BaseBdev2 00:11:25.419 BaseBdev3 00:11:25.419 BaseBdev4' 00:11:25.419 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.419 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:25.419 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.419 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:25.419 19:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.419 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.419 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.419 19:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.419 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.419 19:00:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.419 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.419 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:25.419 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.419 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.419 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.677 [2024-11-26 19:00:52.209026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.677 [2024-11-26 19:00:52.209080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.677 [2024-11-26 19:00:52.209189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.677 [2024-11-26 19:00:52.209315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.677 [2024-11-26 19:00:52.209343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70518 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70518 ']' 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70518 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70518 00:11:25.677 killing process with pid 70518 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70518' 00:11:25.677 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70518 00:11:25.677 [2024-11-26 19:00:52.250274] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:25.678 19:00:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70518 00:11:26.242 [2024-11-26 19:00:52.618897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.177 ************************************ 00:11:27.177 END TEST raid_state_function_test_sb 00:11:27.177 ************************************ 00:11:27.177 19:00:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:27.177 00:11:27.177 real 0m13.388s 00:11:27.177 user 0m22.069s 00:11:27.177 sys 
0m1.895s 00:11:27.177 19:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.177 19:00:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.471 19:00:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:27.471 19:00:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:27.471 19:00:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.471 19:00:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:27.471 ************************************ 00:11:27.471 START TEST raid_superblock_test 00:11:27.471 ************************************ 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:27.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71208 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71208 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71208 ']' 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.471 19:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.471 [2024-11-26 19:00:53.944442] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:11:27.471 [2024-11-26 19:00:53.944897] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71208 ] 00:11:27.728 [2024-11-26 19:00:54.137184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.728 [2024-11-26 19:00:54.314518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.984 [2024-11-26 19:00:54.542702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.984 [2024-11-26 19:00:54.542769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:28.241 
19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.241 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 malloc1 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 [2024-11-26 19:00:54.905866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:28.499 [2024-11-26 19:00:54.906118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.499 [2024-11-26 19:00:54.906160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:28.499 [2024-11-26 19:00:54.906177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.499 [2024-11-26 19:00:54.909211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.499 [2024-11-26 19:00:54.909402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:28.499 pt1 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 malloc2 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 [2024-11-26 19:00:54.958438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:28.499 [2024-11-26 19:00:54.958499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.499 [2024-11-26 19:00:54.958536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:28.499 [2024-11-26 19:00:54.958550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.499 [2024-11-26 19:00:54.961875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.499 [2024-11-26 19:00:54.961928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:28.499 
pt2 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 19:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 malloc3 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 [2024-11-26 19:00:55.027966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:28.499 [2024-11-26 19:00:55.028038] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.499 [2024-11-26 19:00:55.028068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:28.499 [2024-11-26 19:00:55.028083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.499 [2024-11-26 19:00:55.030988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.499 [2024-11-26 19:00:55.031043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:28.499 pt3 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 malloc4 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 [2024-11-26 19:00:55.088840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:28.499 [2024-11-26 19:00:55.088922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.499 [2024-11-26 19:00:55.088952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:28.499 [2024-11-26 19:00:55.088984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.499 [2024-11-26 19:00:55.091940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.499 [2024-11-26 19:00:55.091982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:28.499 pt4 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.499 [2024-11-26 19:00:55.100892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:28.499 [2024-11-26 
19:00:55.103511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:28.499 [2024-11-26 19:00:55.103632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:28.499 [2024-11-26 19:00:55.103735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:28.499 [2024-11-26 19:00:55.103981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:28.499 [2024-11-26 19:00:55.103999] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:28.499 [2024-11-26 19:00:55.104330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:28.499 [2024-11-26 19:00:55.104581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:28.499 [2024-11-26 19:00:55.104602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:28.499 [2024-11-26 19:00:55.104824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.499 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:28.500 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.500 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.500 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.500 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.500 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.500 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.500 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.757 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.757 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.757 "name": "raid_bdev1", 00:11:28.757 "uuid": "a3c8785b-d344-4505-ac6c-761fbd71fe9c", 00:11:28.757 "strip_size_kb": 64, 00:11:28.757 "state": "online", 00:11:28.757 "raid_level": "raid0", 00:11:28.757 "superblock": true, 00:11:28.757 "num_base_bdevs": 4, 00:11:28.757 "num_base_bdevs_discovered": 4, 00:11:28.757 "num_base_bdevs_operational": 4, 00:11:28.757 "base_bdevs_list": [ 00:11:28.757 { 00:11:28.757 "name": "pt1", 00:11:28.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.757 "is_configured": true, 00:11:28.757 "data_offset": 2048, 00:11:28.757 "data_size": 63488 00:11:28.757 }, 00:11:28.757 { 00:11:28.757 "name": "pt2", 00:11:28.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.757 "is_configured": true, 00:11:28.757 "data_offset": 2048, 00:11:28.757 "data_size": 63488 00:11:28.757 }, 00:11:28.757 { 00:11:28.757 "name": "pt3", 00:11:28.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.757 "is_configured": true, 00:11:28.757 "data_offset": 2048, 00:11:28.757 
"data_size": 63488 00:11:28.757 }, 00:11:28.757 { 00:11:28.757 "name": "pt4", 00:11:28.757 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.757 "is_configured": true, 00:11:28.757 "data_offset": 2048, 00:11:28.757 "data_size": 63488 00:11:28.757 } 00:11:28.757 ] 00:11:28.757 }' 00:11:28.757 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.757 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.015 [2024-11-26 19:00:55.593499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.015 "name": "raid_bdev1", 00:11:29.015 "aliases": [ 00:11:29.015 "a3c8785b-d344-4505-ac6c-761fbd71fe9c" 
00:11:29.015 ], 00:11:29.015 "product_name": "Raid Volume", 00:11:29.015 "block_size": 512, 00:11:29.015 "num_blocks": 253952, 00:11:29.015 "uuid": "a3c8785b-d344-4505-ac6c-761fbd71fe9c", 00:11:29.015 "assigned_rate_limits": { 00:11:29.015 "rw_ios_per_sec": 0, 00:11:29.015 "rw_mbytes_per_sec": 0, 00:11:29.015 "r_mbytes_per_sec": 0, 00:11:29.015 "w_mbytes_per_sec": 0 00:11:29.015 }, 00:11:29.015 "claimed": false, 00:11:29.015 "zoned": false, 00:11:29.015 "supported_io_types": { 00:11:29.015 "read": true, 00:11:29.015 "write": true, 00:11:29.015 "unmap": true, 00:11:29.015 "flush": true, 00:11:29.015 "reset": true, 00:11:29.015 "nvme_admin": false, 00:11:29.015 "nvme_io": false, 00:11:29.015 "nvme_io_md": false, 00:11:29.015 "write_zeroes": true, 00:11:29.015 "zcopy": false, 00:11:29.015 "get_zone_info": false, 00:11:29.015 "zone_management": false, 00:11:29.015 "zone_append": false, 00:11:29.015 "compare": false, 00:11:29.015 "compare_and_write": false, 00:11:29.015 "abort": false, 00:11:29.015 "seek_hole": false, 00:11:29.015 "seek_data": false, 00:11:29.015 "copy": false, 00:11:29.015 "nvme_iov_md": false 00:11:29.015 }, 00:11:29.015 "memory_domains": [ 00:11:29.015 { 00:11:29.015 "dma_device_id": "system", 00:11:29.015 "dma_device_type": 1 00:11:29.015 }, 00:11:29.015 { 00:11:29.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.015 "dma_device_type": 2 00:11:29.015 }, 00:11:29.015 { 00:11:29.015 "dma_device_id": "system", 00:11:29.015 "dma_device_type": 1 00:11:29.015 }, 00:11:29.015 { 00:11:29.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.015 "dma_device_type": 2 00:11:29.015 }, 00:11:29.015 { 00:11:29.015 "dma_device_id": "system", 00:11:29.015 "dma_device_type": 1 00:11:29.015 }, 00:11:29.015 { 00:11:29.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.015 "dma_device_type": 2 00:11:29.015 }, 00:11:29.015 { 00:11:29.015 "dma_device_id": "system", 00:11:29.015 "dma_device_type": 1 00:11:29.015 }, 00:11:29.015 { 00:11:29.015 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:29.015 "dma_device_type": 2 00:11:29.015 } 00:11:29.015 ], 00:11:29.015 "driver_specific": { 00:11:29.015 "raid": { 00:11:29.015 "uuid": "a3c8785b-d344-4505-ac6c-761fbd71fe9c", 00:11:29.015 "strip_size_kb": 64, 00:11:29.015 "state": "online", 00:11:29.015 "raid_level": "raid0", 00:11:29.015 "superblock": true, 00:11:29.015 "num_base_bdevs": 4, 00:11:29.015 "num_base_bdevs_discovered": 4, 00:11:29.015 "num_base_bdevs_operational": 4, 00:11:29.015 "base_bdevs_list": [ 00:11:29.015 { 00:11:29.015 "name": "pt1", 00:11:29.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.015 "is_configured": true, 00:11:29.015 "data_offset": 2048, 00:11:29.015 "data_size": 63488 00:11:29.015 }, 00:11:29.015 { 00:11:29.015 "name": "pt2", 00:11:29.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.015 "is_configured": true, 00:11:29.015 "data_offset": 2048, 00:11:29.015 "data_size": 63488 00:11:29.015 }, 00:11:29.015 { 00:11:29.015 "name": "pt3", 00:11:29.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.015 "is_configured": true, 00:11:29.015 "data_offset": 2048, 00:11:29.015 "data_size": 63488 00:11:29.015 }, 00:11:29.015 { 00:11:29.015 "name": "pt4", 00:11:29.015 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.015 "is_configured": true, 00:11:29.015 "data_offset": 2048, 00:11:29.015 "data_size": 63488 00:11:29.015 } 00:11:29.015 ] 00:11:29.015 } 00:11:29.015 } 00:11:29.015 }' 00:11:29.015 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:29.274 pt2 00:11:29.274 pt3 00:11:29.274 pt4' 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.274 19:00:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.274 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.532 [2024-11-26 19:00:55.965497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.532 19:00:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a3c8785b-d344-4505-ac6c-761fbd71fe9c 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a3c8785b-d344-4505-ac6c-761fbd71fe9c ']' 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.532 [2024-11-26 19:00:56.025120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:29.532 [2024-11-26 19:00:56.025153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.532 [2024-11-26 19:00:56.025253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.532 [2024-11-26 19:00:56.025366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.532 [2024-11-26 19:00:56.025394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:29.532 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:29.533 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.533 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.791 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:29.791 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:29.791 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:29.791 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:29.791 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:29.791 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.791 19:00:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:29.791 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.791 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:29.791 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.791 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.791 [2024-11-26 19:00:56.185182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:29.791 [2024-11-26 19:00:56.187851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:29.791 [2024-11-26 19:00:56.187953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:29.791 [2024-11-26 19:00:56.188009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:29.791 [2024-11-26 19:00:56.188086] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:29.791 [2024-11-26 19:00:56.188167] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:29.791 [2024-11-26 19:00:56.188201] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:29.792 [2024-11-26 19:00:56.188232] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:29.792 [2024-11-26 19:00:56.188254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:29.792 [2024-11-26 19:00:56.188275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:29.792 request: 00:11:29.792 { 00:11:29.792 "name": "raid_bdev1", 00:11:29.792 "raid_level": "raid0", 00:11:29.792 "base_bdevs": [ 00:11:29.792 "malloc1", 00:11:29.792 "malloc2", 00:11:29.792 "malloc3", 00:11:29.792 "malloc4" 00:11:29.792 ], 00:11:29.792 "strip_size_kb": 64, 00:11:29.792 "superblock": false, 00:11:29.792 "method": "bdev_raid_create", 00:11:29.792 "req_id": 1 00:11:29.792 } 00:11:29.792 Got JSON-RPC error response 00:11:29.792 response: 00:11:29.792 { 00:11:29.792 "code": -17, 00:11:29.792 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:29.792 } 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.792 [2024-11-26 19:00:56.253170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:29.792 [2024-11-26 19:00:56.253248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.792 [2024-11-26 19:00:56.253276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:29.792 [2024-11-26 19:00:56.253308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.792 [2024-11-26 19:00:56.256362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.792 [2024-11-26 19:00:56.256407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:29.792 [2024-11-26 19:00:56.256510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:29.792 [2024-11-26 19:00:56.256607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:29.792 pt1 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.792 "name": "raid_bdev1", 00:11:29.792 "uuid": "a3c8785b-d344-4505-ac6c-761fbd71fe9c", 00:11:29.792 "strip_size_kb": 64, 00:11:29.792 "state": "configuring", 00:11:29.792 "raid_level": "raid0", 00:11:29.792 "superblock": true, 00:11:29.792 "num_base_bdevs": 4, 00:11:29.792 "num_base_bdevs_discovered": 1, 00:11:29.792 "num_base_bdevs_operational": 4, 00:11:29.792 "base_bdevs_list": [ 00:11:29.792 { 00:11:29.792 "name": "pt1", 00:11:29.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.792 "is_configured": true, 00:11:29.792 "data_offset": 2048, 00:11:29.792 "data_size": 63488 00:11:29.792 }, 00:11:29.792 { 00:11:29.792 "name": null, 00:11:29.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.792 "is_configured": false, 00:11:29.792 "data_offset": 2048, 00:11:29.792 "data_size": 63488 00:11:29.792 }, 00:11:29.792 { 00:11:29.792 "name": null, 00:11:29.792 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.792 "is_configured": false, 00:11:29.792 "data_offset": 2048, 00:11:29.792 "data_size": 63488 00:11:29.792 }, 00:11:29.792 { 00:11:29.792 "name": null, 00:11:29.792 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.792 "is_configured": false, 00:11:29.792 "data_offset": 2048, 00:11:29.792 "data_size": 63488 00:11:29.792 } 00:11:29.792 ] 00:11:29.792 }' 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.792 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:30.358 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:30.358 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 [2024-11-26 19:00:56.793409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:30.358 [2024-11-26 19:00:56.793504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.358 [2024-11-26 19:00:56.793534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:30.358 [2024-11-26 19:00:56.793552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.358 [2024-11-26 19:00:56.794161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.358 [2024-11-26 19:00:56.794199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:30.358 [2024-11-26 19:00:56.794325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:30.358 [2024-11-26 19:00:56.794367] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:30.358 pt2 00:11:30.358 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:30.358 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.358 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.358 [2024-11-26 19:00:56.801366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:30.358 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.358 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.359 19:00:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.359 "name": "raid_bdev1", 00:11:30.359 "uuid": "a3c8785b-d344-4505-ac6c-761fbd71fe9c", 00:11:30.359 "strip_size_kb": 64, 00:11:30.359 "state": "configuring", 00:11:30.359 "raid_level": "raid0", 00:11:30.359 "superblock": true, 00:11:30.359 "num_base_bdevs": 4, 00:11:30.359 "num_base_bdevs_discovered": 1, 00:11:30.359 "num_base_bdevs_operational": 4, 00:11:30.359 "base_bdevs_list": [ 00:11:30.359 { 00:11:30.359 "name": "pt1", 00:11:30.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:30.359 "is_configured": true, 00:11:30.359 "data_offset": 2048, 00:11:30.359 "data_size": 63488 00:11:30.359 }, 00:11:30.359 { 00:11:30.359 "name": null, 00:11:30.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:30.359 "is_configured": false, 00:11:30.359 "data_offset": 0, 00:11:30.359 "data_size": 63488 00:11:30.359 }, 00:11:30.359 { 00:11:30.359 "name": null, 00:11:30.359 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:30.359 "is_configured": false, 00:11:30.359 "data_offset": 2048, 00:11:30.359 "data_size": 63488 00:11:30.359 }, 00:11:30.359 { 00:11:30.359 "name": null, 00:11:30.359 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:30.359 "is_configured": false, 00:11:30.359 "data_offset": 2048, 00:11:30.359 "data_size": 63488 00:11:30.359 } 00:11:30.359 ] 00:11:30.359 }' 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.359 19:00:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.924 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:30.924 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.925 [2024-11-26 19:00:57.349528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:30.925 [2024-11-26 19:00:57.349611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.925 [2024-11-26 19:00:57.349659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:30.925 [2024-11-26 19:00:57.349673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.925 [2024-11-26 19:00:57.350253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.925 [2024-11-26 19:00:57.350277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:30.925 [2024-11-26 19:00:57.350434] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:30.925 [2024-11-26 19:00:57.350470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:30.925 pt2 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.925 [2024-11-26 19:00:57.361481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:30.925 [2024-11-26 19:00:57.361536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.925 [2024-11-26 19:00:57.361562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:30.925 [2024-11-26 19:00:57.361575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.925 [2024-11-26 19:00:57.362050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.925 [2024-11-26 19:00:57.362074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:30.925 [2024-11-26 19:00:57.362150] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:30.925 [2024-11-26 19:00:57.362185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:30.925 pt3 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.925 [2024-11-26 19:00:57.369464] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:30.925 [2024-11-26 19:00:57.369513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.925 [2024-11-26 19:00:57.369538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:30.925 [2024-11-26 19:00:57.369551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.925 [2024-11-26 19:00:57.370060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.925 [2024-11-26 19:00:57.370091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:30.925 [2024-11-26 19:00:57.370170] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:30.925 [2024-11-26 19:00:57.370202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:30.925 [2024-11-26 19:00:57.370389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:30.925 [2024-11-26 19:00:57.370406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:30.925 [2024-11-26 19:00:57.370712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:30.925 [2024-11-26 19:00:57.370930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:30.925 [2024-11-26 19:00:57.370952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:30.925 [2024-11-26 19:00:57.371109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.925 pt4 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.925 "name": "raid_bdev1", 00:11:30.925 "uuid": "a3c8785b-d344-4505-ac6c-761fbd71fe9c", 00:11:30.925 "strip_size_kb": 64, 00:11:30.925 "state": "online", 00:11:30.925 "raid_level": "raid0", 00:11:30.925 
"superblock": true, 00:11:30.925 "num_base_bdevs": 4, 00:11:30.925 "num_base_bdevs_discovered": 4, 00:11:30.925 "num_base_bdevs_operational": 4, 00:11:30.925 "base_bdevs_list": [ 00:11:30.925 { 00:11:30.925 "name": "pt1", 00:11:30.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:30.925 "is_configured": true, 00:11:30.925 "data_offset": 2048, 00:11:30.925 "data_size": 63488 00:11:30.925 }, 00:11:30.925 { 00:11:30.925 "name": "pt2", 00:11:30.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:30.925 "is_configured": true, 00:11:30.925 "data_offset": 2048, 00:11:30.925 "data_size": 63488 00:11:30.925 }, 00:11:30.925 { 00:11:30.925 "name": "pt3", 00:11:30.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:30.925 "is_configured": true, 00:11:30.925 "data_offset": 2048, 00:11:30.925 "data_size": 63488 00:11:30.925 }, 00:11:30.925 { 00:11:30.925 "name": "pt4", 00:11:30.925 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:30.925 "is_configured": true, 00:11:30.925 "data_offset": 2048, 00:11:30.925 "data_size": 63488 00:11:30.925 } 00:11:30.925 ] 00:11:30.925 }' 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.925 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:31.491 19:00:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.491 [2024-11-26 19:00:57.898090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:31.491 "name": "raid_bdev1", 00:11:31.491 "aliases": [ 00:11:31.491 "a3c8785b-d344-4505-ac6c-761fbd71fe9c" 00:11:31.491 ], 00:11:31.491 "product_name": "Raid Volume", 00:11:31.491 "block_size": 512, 00:11:31.491 "num_blocks": 253952, 00:11:31.491 "uuid": "a3c8785b-d344-4505-ac6c-761fbd71fe9c", 00:11:31.491 "assigned_rate_limits": { 00:11:31.491 "rw_ios_per_sec": 0, 00:11:31.491 "rw_mbytes_per_sec": 0, 00:11:31.491 "r_mbytes_per_sec": 0, 00:11:31.491 "w_mbytes_per_sec": 0 00:11:31.491 }, 00:11:31.491 "claimed": false, 00:11:31.491 "zoned": false, 00:11:31.491 "supported_io_types": { 00:11:31.491 "read": true, 00:11:31.491 "write": true, 00:11:31.491 "unmap": true, 00:11:31.491 "flush": true, 00:11:31.491 "reset": true, 00:11:31.491 "nvme_admin": false, 00:11:31.491 "nvme_io": false, 00:11:31.491 "nvme_io_md": false, 00:11:31.491 "write_zeroes": true, 00:11:31.491 "zcopy": false, 00:11:31.491 "get_zone_info": false, 00:11:31.491 "zone_management": false, 00:11:31.491 "zone_append": false, 00:11:31.491 "compare": false, 00:11:31.491 "compare_and_write": false, 00:11:31.491 "abort": false, 00:11:31.491 "seek_hole": false, 00:11:31.491 "seek_data": false, 00:11:31.491 "copy": false, 00:11:31.491 "nvme_iov_md": false 00:11:31.491 }, 00:11:31.491 
"memory_domains": [ 00:11:31.491 { 00:11:31.491 "dma_device_id": "system", 00:11:31.491 "dma_device_type": 1 00:11:31.491 }, 00:11:31.491 { 00:11:31.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.491 "dma_device_type": 2 00:11:31.491 }, 00:11:31.491 { 00:11:31.491 "dma_device_id": "system", 00:11:31.491 "dma_device_type": 1 00:11:31.491 }, 00:11:31.491 { 00:11:31.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.491 "dma_device_type": 2 00:11:31.491 }, 00:11:31.491 { 00:11:31.491 "dma_device_id": "system", 00:11:31.491 "dma_device_type": 1 00:11:31.491 }, 00:11:31.491 { 00:11:31.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.491 "dma_device_type": 2 00:11:31.491 }, 00:11:31.491 { 00:11:31.491 "dma_device_id": "system", 00:11:31.491 "dma_device_type": 1 00:11:31.491 }, 00:11:31.491 { 00:11:31.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.491 "dma_device_type": 2 00:11:31.491 } 00:11:31.491 ], 00:11:31.491 "driver_specific": { 00:11:31.491 "raid": { 00:11:31.491 "uuid": "a3c8785b-d344-4505-ac6c-761fbd71fe9c", 00:11:31.491 "strip_size_kb": 64, 00:11:31.491 "state": "online", 00:11:31.491 "raid_level": "raid0", 00:11:31.491 "superblock": true, 00:11:31.491 "num_base_bdevs": 4, 00:11:31.491 "num_base_bdevs_discovered": 4, 00:11:31.491 "num_base_bdevs_operational": 4, 00:11:31.491 "base_bdevs_list": [ 00:11:31.491 { 00:11:31.491 "name": "pt1", 00:11:31.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:31.491 "is_configured": true, 00:11:31.491 "data_offset": 2048, 00:11:31.491 "data_size": 63488 00:11:31.491 }, 00:11:31.491 { 00:11:31.491 "name": "pt2", 00:11:31.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:31.491 "is_configured": true, 00:11:31.491 "data_offset": 2048, 00:11:31.491 "data_size": 63488 00:11:31.491 }, 00:11:31.491 { 00:11:31.491 "name": "pt3", 00:11:31.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:31.491 "is_configured": true, 00:11:31.491 "data_offset": 2048, 00:11:31.491 "data_size": 63488 
00:11:31.491 }, 00:11:31.491 { 00:11:31.491 "name": "pt4", 00:11:31.491 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:31.491 "is_configured": true, 00:11:31.491 "data_offset": 2048, 00:11:31.491 "data_size": 63488 00:11:31.491 } 00:11:31.491 ] 00:11:31.491 } 00:11:31.491 } 00:11:31.491 }' 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:31.491 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:31.491 pt2 00:11:31.491 pt3 00:11:31.492 pt4' 00:11:31.492 19:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.492 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.749 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.749 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.749 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.749 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.749 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:31.750 [2024-11-26 19:00:58.262102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a3c8785b-d344-4505-ac6c-761fbd71fe9c '!=' a3c8785b-d344-4505-ac6c-761fbd71fe9c ']' 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71208 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71208 ']' 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71208 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71208 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.750 killing process with pid 71208 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71208' 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71208 00:11:31.750 [2024-11-26 19:00:58.334861] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.750 19:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71208 00:11:31.750 [2024-11-26 19:00:58.334962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.750 [2024-11-26 19:00:58.335063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.750 [2024-11-26 19:00:58.335079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:32.316 [2024-11-26 19:00:58.697544] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.264 19:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:33.264 00:11:33.264 real 0m6.019s 00:11:33.264 user 0m8.970s 00:11:33.264 sys 0m0.876s 00:11:33.264 19:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.264 ************************************ 00:11:33.264 END TEST raid_superblock_test 00:11:33.264 ************************************ 00:11:33.264 19:00:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.541 19:00:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:33.541 19:00:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:33.541 19:00:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.541 19:00:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.541 ************************************ 00:11:33.541 START TEST raid_read_error_test 00:11:33.541 ************************************ 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.abGmNnE1Cx 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71474 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71474 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71474 ']' 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.541 19:00:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.541 [2024-11-26 19:01:00.060241] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:11:33.541 [2024-11-26 19:01:00.060437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71474 ] 00:11:33.799 [2024-11-26 19:01:00.255891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.057 [2024-11-26 19:01:00.459536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.316 [2024-11-26 19:01:00.713651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.316 [2024-11-26 19:01:00.713723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.574 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.575 BaseBdev1_malloc 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.575 true 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.575 [2024-11-26 19:01:01.139991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:34.575 [2024-11-26 19:01:01.140070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.575 [2024-11-26 19:01:01.140100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:34.575 [2024-11-26 19:01:01.140118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.575 [2024-11-26 19:01:01.143115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.575 [2024-11-26 19:01:01.143177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:34.575 BaseBdev1 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.575 BaseBdev2_malloc 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.575 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.834 true 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.834 [2024-11-26 19:01:01.200967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:34.834 [2024-11-26 19:01:01.201045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.834 [2024-11-26 19:01:01.201073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:34.834 [2024-11-26 19:01:01.201098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.834 [2024-11-26 19:01:01.204256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.834 [2024-11-26 19:01:01.204318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:34.834 BaseBdev2 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.834 BaseBdev3_malloc 00:11:34.834 19:01:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.834 true 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.834 [2024-11-26 19:01:01.269212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:34.834 [2024-11-26 19:01:01.269275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.834 [2024-11-26 19:01:01.269318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:34.834 [2024-11-26 19:01:01.269337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.834 [2024-11-26 19:01:01.272243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.834 [2024-11-26 19:01:01.272316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:34.834 BaseBdev3 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.834 BaseBdev4_malloc 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.834 true 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.834 [2024-11-26 19:01:01.330335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:34.834 [2024-11-26 19:01:01.330419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.834 [2024-11-26 19:01:01.330447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:34.834 [2024-11-26 19:01:01.330465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.834 [2024-11-26 19:01:01.333420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.834 [2024-11-26 19:01:01.333470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:34.834 BaseBdev4 00:11:34.834 19:01:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.835 [2024-11-26 19:01:01.338442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.835 [2024-11-26 19:01:01.340995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.835 [2024-11-26 19:01:01.341148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.835 [2024-11-26 19:01:01.341249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:34.835 [2024-11-26 19:01:01.341575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:34.835 [2024-11-26 19:01:01.341601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:34.835 [2024-11-26 19:01:01.341911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:34.835 [2024-11-26 19:01:01.342128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:34.835 [2024-11-26 19:01:01.342146] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:34.835 [2024-11-26 19:01:01.342411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:34.835 19:01:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.835 "name": "raid_bdev1", 00:11:34.835 "uuid": "6b695242-4d27-403c-9cf3-bacc608b1a81", 00:11:34.835 "strip_size_kb": 64, 00:11:34.835 "state": "online", 00:11:34.835 "raid_level": "raid0", 00:11:34.835 "superblock": true, 00:11:34.835 "num_base_bdevs": 4, 00:11:34.835 "num_base_bdevs_discovered": 4, 00:11:34.835 "num_base_bdevs_operational": 4, 00:11:34.835 "base_bdevs_list": [ 00:11:34.835 
{ 00:11:34.835 "name": "BaseBdev1", 00:11:34.835 "uuid": "874f9760-bef5-5930-a701-d7e54faa4c60", 00:11:34.835 "is_configured": true, 00:11:34.835 "data_offset": 2048, 00:11:34.835 "data_size": 63488 00:11:34.835 }, 00:11:34.835 { 00:11:34.835 "name": "BaseBdev2", 00:11:34.835 "uuid": "8e1852d7-58b4-5315-98d7-31515b4a9ce0", 00:11:34.835 "is_configured": true, 00:11:34.835 "data_offset": 2048, 00:11:34.835 "data_size": 63488 00:11:34.835 }, 00:11:34.835 { 00:11:34.835 "name": "BaseBdev3", 00:11:34.835 "uuid": "55a1c964-f087-5be4-acbc-43a88c93a165", 00:11:34.835 "is_configured": true, 00:11:34.835 "data_offset": 2048, 00:11:34.835 "data_size": 63488 00:11:34.835 }, 00:11:34.835 { 00:11:34.835 "name": "BaseBdev4", 00:11:34.835 "uuid": "55958a1b-6f89-5752-bf73-4153d49b6a88", 00:11:34.835 "is_configured": true, 00:11:34.835 "data_offset": 2048, 00:11:34.835 "data_size": 63488 00:11:34.835 } 00:11:34.835 ] 00:11:34.835 }' 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.835 19:01:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.401 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:35.401 19:01:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:35.401 [2024-11-26 19:01:02.016272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.336 19:01:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.336 19:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.594 19:01:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.594 "name": "raid_bdev1", 00:11:36.594 "uuid": "6b695242-4d27-403c-9cf3-bacc608b1a81", 00:11:36.594 "strip_size_kb": 64, 00:11:36.594 "state": "online", 00:11:36.594 "raid_level": "raid0", 00:11:36.594 "superblock": true, 00:11:36.594 "num_base_bdevs": 4, 00:11:36.594 "num_base_bdevs_discovered": 4, 00:11:36.594 "num_base_bdevs_operational": 4, 00:11:36.594 "base_bdevs_list": [ 00:11:36.594 { 00:11:36.594 "name": "BaseBdev1", 00:11:36.594 "uuid": "874f9760-bef5-5930-a701-d7e54faa4c60", 00:11:36.594 "is_configured": true, 00:11:36.594 "data_offset": 2048, 00:11:36.594 "data_size": 63488 00:11:36.594 }, 00:11:36.594 { 00:11:36.594 "name": "BaseBdev2", 00:11:36.594 "uuid": "8e1852d7-58b4-5315-98d7-31515b4a9ce0", 00:11:36.594 "is_configured": true, 00:11:36.594 "data_offset": 2048, 00:11:36.594 "data_size": 63488 00:11:36.594 }, 00:11:36.594 { 00:11:36.594 "name": "BaseBdev3", 00:11:36.594 "uuid": "55a1c964-f087-5be4-acbc-43a88c93a165", 00:11:36.594 "is_configured": true, 00:11:36.594 "data_offset": 2048, 00:11:36.594 "data_size": 63488 00:11:36.594 }, 00:11:36.594 { 00:11:36.594 "name": "BaseBdev4", 00:11:36.594 "uuid": "55958a1b-6f89-5752-bf73-4153d49b6a88", 00:11:36.594 "is_configured": true, 00:11:36.594 "data_offset": 2048, 00:11:36.594 "data_size": 63488 00:11:36.594 } 00:11:36.594 ] 00:11:36.594 }' 00:11:36.594 19:01:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.594 19:01:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.853 19:01:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:36.853 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.853 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.853 [2024-11-26 19:01:03.450742] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.853 [2024-11-26 19:01:03.450782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.853 [2024-11-26 19:01:03.454227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.853 [2024-11-26 19:01:03.454316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.853 [2024-11-26 19:01:03.454552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.853 [2024-11-26 19:01:03.454736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:36.853 { 00:11:36.853 "results": [ 00:11:36.853 { 00:11:36.853 "job": "raid_bdev1", 00:11:36.853 "core_mask": "0x1", 00:11:36.853 "workload": "randrw", 00:11:36.853 "percentage": 50, 00:11:36.853 "status": "finished", 00:11:36.853 "queue_depth": 1, 00:11:36.853 "io_size": 131072, 00:11:36.853 "runtime": 1.431952, 00:11:36.853 "iops": 9668.620177212644, 00:11:36.853 "mibps": 1208.5775221515805, 00:11:36.853 "io_failed": 1, 00:11:36.853 "io_timeout": 0, 00:11:36.853 "avg_latency_us": 145.16327498588367, 00:11:36.853 "min_latency_us": 39.79636363636364, 00:11:36.853 "max_latency_us": 1869.2654545454545 00:11:36.853 } 00:11:36.853 ], 00:11:36.853 "core_count": 1 00:11:36.853 } 00:11:36.853 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.853 19:01:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71474 00:11:36.853 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71474 ']' 00:11:36.853 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71474 00:11:36.853 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:36.853 19:01:03 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.853 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71474 00:11:37.111 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.111 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.111 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71474' 00:11:37.111 killing process with pid 71474 00:11:37.111 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71474 00:11:37.111 [2024-11-26 19:01:03.490975] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.111 19:01:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71474 00:11:37.370 [2024-11-26 19:01:03.794442] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.800 19:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.abGmNnE1Cx 00:11:38.800 19:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:38.800 19:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:38.800 19:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:38.800 19:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:38.800 19:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.800 19:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:38.800 19:01:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:38.800 00:11:38.800 real 0m5.128s 00:11:38.800 user 0m6.245s 00:11:38.800 sys 0m0.747s 00:11:38.800 19:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:38.800 19:01:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.800 ************************************ 00:11:38.800 END TEST raid_read_error_test 00:11:38.801 ************************************ 00:11:38.801 19:01:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:38.801 19:01:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:38.801 19:01:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.801 19:01:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.801 ************************************ 00:11:38.801 START TEST raid_write_error_test 00:11:38.801 ************************************ 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AqpA1grWKV 00:11:38.801 19:01:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71625 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71625 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71625 ']' 00:11:38.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.801 19:01:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.801 [2024-11-26 19:01:05.209254] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:11:38.801 [2024-11-26 19:01:05.209689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71625 ] 00:11:38.801 [2024-11-26 19:01:05.398793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.060 [2024-11-26 19:01:05.550023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.319 [2024-11-26 19:01:05.776698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.319 [2024-11-26 19:01:05.777051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.886 BaseBdev1_malloc 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.886 true 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.886 [2024-11-26 19:01:06.316636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:39.886 [2024-11-26 19:01:06.316726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.886 [2024-11-26 19:01:06.316760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:39.886 [2024-11-26 19:01:06.316779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.886 [2024-11-26 19:01:06.319707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.886 [2024-11-26 19:01:06.319759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:39.886 BaseBdev1 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.886 BaseBdev2_malloc 00:11:39.886 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:39.887 19:01:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.887 true 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.887 [2024-11-26 19:01:06.384737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:39.887 [2024-11-26 19:01:06.384943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.887 [2024-11-26 19:01:06.385013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:39.887 [2024-11-26 19:01:06.385255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.887 [2024-11-26 19:01:06.388239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.887 [2024-11-26 19:01:06.388417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:39.887 BaseBdev2 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:39.887 BaseBdev3_malloc 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.887 true 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.887 [2024-11-26 19:01:06.461607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:39.887 [2024-11-26 19:01:06.461846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.887 [2024-11-26 19:01:06.461884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:39.887 [2024-11-26 19:01:06.461904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.887 [2024-11-26 19:01:06.464890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.887 [2024-11-26 19:01:06.465103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:39.887 BaseBdev3 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.887 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.146 BaseBdev4_malloc 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.146 true 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.146 [2024-11-26 19:01:06.534531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:40.146 [2024-11-26 19:01:06.534602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.146 [2024-11-26 19:01:06.534632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:40.146 [2024-11-26 19:01:06.534651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.146 [2024-11-26 19:01:06.537639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.146 [2024-11-26 19:01:06.537845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:40.146 BaseBdev4 
00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.146 [2024-11-26 19:01:06.546748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.146 [2024-11-26 19:01:06.549341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.146 [2024-11-26 19:01:06.549451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.146 [2024-11-26 19:01:06.549550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:40.146 [2024-11-26 19:01:06.549897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:40.146 [2024-11-26 19:01:06.549946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:40.146 [2024-11-26 19:01:06.550271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:40.146 [2024-11-26 19:01:06.550562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:40.146 [2024-11-26 19:01:06.550591] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:40.146 [2024-11-26 19:01:06.550886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.146 "name": "raid_bdev1", 00:11:40.146 "uuid": "529d64c4-043e-4ab0-9b8e-11ffbc905ec0", 00:11:40.146 "strip_size_kb": 64, 00:11:40.146 "state": "online", 00:11:40.146 "raid_level": "raid0", 00:11:40.146 "superblock": true, 00:11:40.146 "num_base_bdevs": 4, 00:11:40.146 "num_base_bdevs_discovered": 4, 00:11:40.146 
"num_base_bdevs_operational": 4, 00:11:40.146 "base_bdevs_list": [ 00:11:40.146 { 00:11:40.146 "name": "BaseBdev1", 00:11:40.146 "uuid": "0c93c26c-e705-590e-be40-ff692fc14e89", 00:11:40.146 "is_configured": true, 00:11:40.146 "data_offset": 2048, 00:11:40.146 "data_size": 63488 00:11:40.146 }, 00:11:40.146 { 00:11:40.146 "name": "BaseBdev2", 00:11:40.146 "uuid": "c7f9f241-bc12-5a57-a271-195ddda69a71", 00:11:40.146 "is_configured": true, 00:11:40.146 "data_offset": 2048, 00:11:40.146 "data_size": 63488 00:11:40.146 }, 00:11:40.146 { 00:11:40.146 "name": "BaseBdev3", 00:11:40.146 "uuid": "4e7b82a6-837d-582f-8d1e-dfd49d9d10db", 00:11:40.146 "is_configured": true, 00:11:40.146 "data_offset": 2048, 00:11:40.146 "data_size": 63488 00:11:40.146 }, 00:11:40.146 { 00:11:40.146 "name": "BaseBdev4", 00:11:40.146 "uuid": "da07c91c-44e6-5a6f-a49c-797e528a097d", 00:11:40.146 "is_configured": true, 00:11:40.146 "data_offset": 2048, 00:11:40.146 "data_size": 63488 00:11:40.146 } 00:11:40.146 ] 00:11:40.146 }' 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.146 19:01:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.714 19:01:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:40.714 19:01:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:40.714 [2024-11-26 19:01:07.164522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.650 "name": "raid_bdev1", 00:11:41.650 "uuid": "529d64c4-043e-4ab0-9b8e-11ffbc905ec0", 00:11:41.650 "strip_size_kb": 64, 00:11:41.650 "state": "online", 00:11:41.650 "raid_level": "raid0", 00:11:41.650 "superblock": true, 00:11:41.650 "num_base_bdevs": 4, 00:11:41.650 "num_base_bdevs_discovered": 4, 00:11:41.650 "num_base_bdevs_operational": 4, 00:11:41.650 "base_bdevs_list": [ 00:11:41.650 { 00:11:41.650 "name": "BaseBdev1", 00:11:41.650 "uuid": "0c93c26c-e705-590e-be40-ff692fc14e89", 00:11:41.650 "is_configured": true, 00:11:41.650 "data_offset": 2048, 00:11:41.650 "data_size": 63488 00:11:41.650 }, 00:11:41.650 { 00:11:41.650 "name": "BaseBdev2", 00:11:41.650 "uuid": "c7f9f241-bc12-5a57-a271-195ddda69a71", 00:11:41.650 "is_configured": true, 00:11:41.650 "data_offset": 2048, 00:11:41.650 "data_size": 63488 00:11:41.650 }, 00:11:41.650 { 00:11:41.650 "name": "BaseBdev3", 00:11:41.650 "uuid": "4e7b82a6-837d-582f-8d1e-dfd49d9d10db", 00:11:41.650 "is_configured": true, 00:11:41.650 "data_offset": 2048, 00:11:41.650 "data_size": 63488 00:11:41.650 }, 00:11:41.650 { 00:11:41.650 "name": "BaseBdev4", 00:11:41.650 "uuid": "da07c91c-44e6-5a6f-a49c-797e528a097d", 00:11:41.650 "is_configured": true, 00:11:41.650 "data_offset": 2048, 00:11:41.650 "data_size": 63488 00:11:41.650 } 00:11:41.650 ] 00:11:41.650 }' 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.650 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:42.217 [2024-11-26 19:01:08.617822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.217 [2024-11-26 19:01:08.618002] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.217 [2024-11-26 19:01:08.621628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.217 [2024-11-26 19:01:08.621839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.217 [2024-11-26 19:01:08.621952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.217 [2024-11-26 19:01:08.622149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:42.217 { 00:11:42.217 "results": [ 00:11:42.217 { 00:11:42.217 "job": "raid_bdev1", 00:11:42.217 "core_mask": "0x1", 00:11:42.217 "workload": "randrw", 00:11:42.217 "percentage": 50, 00:11:42.217 "status": "finished", 00:11:42.217 "queue_depth": 1, 00:11:42.217 "io_size": 131072, 00:11:42.217 "runtime": 1.450887, 00:11:42.217 "iops": 9662.365160071045, 00:11:42.217 "mibps": 1207.7956450088807, 00:11:42.217 "io_failed": 1, 00:11:42.217 "io_timeout": 0, 00:11:42.217 "avg_latency_us": 145.39897808325767, 00:11:42.217 "min_latency_us": 42.35636363636364, 00:11:42.217 "max_latency_us": 1839.4763636363637 00:11:42.217 } 00:11:42.217 ], 00:11:42.217 "core_count": 1 00:11:42.217 } 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71625 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71625 ']' 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71625 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71625 00:11:42.217 killing process with pid 71625 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71625' 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71625 00:11:42.217 [2024-11-26 19:01:08.654904] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.217 19:01:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71625 00:11:42.476 [2024-11-26 19:01:08.968344] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.858 19:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AqpA1grWKV 00:11:43.858 19:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:43.858 19:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:43.858 19:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:11:43.858 19:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:43.858 ************************************ 00:11:43.858 END TEST raid_write_error_test 00:11:43.858 ************************************ 00:11:43.858 19:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.858 19:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:43.858 19:01:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.69 != \0\.\0\0 ]] 00:11:43.858 00:11:43.858 real 0m5.131s 00:11:43.858 user 0m6.230s 00:11:43.858 sys 0m0.702s 00:11:43.858 19:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.858 19:01:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.858 19:01:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:43.858 19:01:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:43.858 19:01:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:43.858 19:01:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.858 19:01:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.858 ************************************ 00:11:43.858 START TEST raid_state_function_test 00:11:43.858 ************************************ 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:43.858 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:43.859 Process raid pid: 71769 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71769 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71769' 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71769 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71769 ']' 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.859 19:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.859 [2024-11-26 19:01:10.393754] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:11:43.859 [2024-11-26 19:01:10.394213] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.118 [2024-11-26 19:01:10.585850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.376 [2024-11-26 19:01:10.740849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.376 [2024-11-26 19:01:10.984450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.376 [2024-11-26 19:01:10.984502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.944 [2024-11-26 19:01:11.411417] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:44.944 [2024-11-26 19:01:11.411502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:44.944 [2024-11-26 19:01:11.411521] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:44.944 [2024-11-26 19:01:11.411538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:44.944 [2024-11-26 19:01:11.411548] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:44.944 [2024-11-26 19:01:11.411563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:44.944 [2024-11-26 19:01:11.411573] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:44.944 [2024-11-26 19:01:11.411588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.944 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.945 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.945 "name": "Existed_Raid", 00:11:44.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.945 "strip_size_kb": 64, 00:11:44.945 "state": "configuring", 00:11:44.945 "raid_level": "concat", 00:11:44.945 "superblock": false, 00:11:44.945 "num_base_bdevs": 4, 00:11:44.945 "num_base_bdevs_discovered": 0, 00:11:44.945 "num_base_bdevs_operational": 4, 00:11:44.945 "base_bdevs_list": [ 00:11:44.945 { 00:11:44.945 "name": "BaseBdev1", 00:11:44.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.945 "is_configured": false, 00:11:44.945 "data_offset": 0, 00:11:44.945 "data_size": 0 00:11:44.945 }, 00:11:44.945 { 00:11:44.945 "name": "BaseBdev2", 00:11:44.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.945 "is_configured": false, 00:11:44.945 "data_offset": 0, 00:11:44.945 "data_size": 0 00:11:44.945 }, 00:11:44.945 { 00:11:44.945 "name": "BaseBdev3", 00:11:44.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.945 "is_configured": false, 00:11:44.945 "data_offset": 0, 00:11:44.945 "data_size": 0 00:11:44.945 }, 00:11:44.945 { 00:11:44.945 "name": "BaseBdev4", 00:11:44.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.945 "is_configured": false, 00:11:44.945 "data_offset": 0, 00:11:44.945 "data_size": 0 00:11:44.945 } 00:11:44.945 ] 00:11:44.945 }' 00:11:44.945 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.945 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.514 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:45.514 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.514 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.514 [2024-11-26 19:01:11.947481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.514 [2024-11-26 19:01:11.947536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:45.514 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.514 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:45.514 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.514 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.514 [2024-11-26 19:01:11.959476] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.514 [2024-11-26 19:01:11.959654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.514 [2024-11-26 19:01:11.959793] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.514 [2024-11-26 19:01:11.959853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.514 [2024-11-26 19:01:11.959958] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:45.514 [2024-11-26 19:01:11.960015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.514 [2024-11-26 19:01:11.960144] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:45.514 [2024-11-26 19:01:11.960204] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:45.514 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.514 19:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.514 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.514 19:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.514 [2024-11-26 19:01:12.009368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.514 BaseBdev1 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.514 [ 00:11:45.514 { 00:11:45.514 "name": "BaseBdev1", 00:11:45.514 "aliases": [ 00:11:45.514 "8fe0ba20-784c-405e-b29e-d6746d153e04" 00:11:45.514 ], 00:11:45.514 "product_name": "Malloc disk", 00:11:45.514 "block_size": 512, 00:11:45.514 "num_blocks": 65536, 00:11:45.514 "uuid": "8fe0ba20-784c-405e-b29e-d6746d153e04", 00:11:45.514 "assigned_rate_limits": { 00:11:45.514 "rw_ios_per_sec": 0, 00:11:45.514 "rw_mbytes_per_sec": 0, 00:11:45.514 "r_mbytes_per_sec": 0, 00:11:45.514 "w_mbytes_per_sec": 0 00:11:45.514 }, 00:11:45.514 "claimed": true, 00:11:45.514 "claim_type": "exclusive_write", 00:11:45.514 "zoned": false, 00:11:45.514 "supported_io_types": { 00:11:45.514 "read": true, 00:11:45.514 "write": true, 00:11:45.514 "unmap": true, 00:11:45.514 "flush": true, 00:11:45.514 "reset": true, 00:11:45.514 "nvme_admin": false, 00:11:45.514 "nvme_io": false, 00:11:45.514 "nvme_io_md": false, 00:11:45.514 "write_zeroes": true, 00:11:45.514 "zcopy": true, 00:11:45.514 "get_zone_info": false, 00:11:45.514 "zone_management": false, 00:11:45.514 "zone_append": false, 00:11:45.514 "compare": false, 00:11:45.514 "compare_and_write": false, 00:11:45.514 "abort": true, 00:11:45.514 "seek_hole": false, 00:11:45.514 "seek_data": false, 00:11:45.514 "copy": true, 00:11:45.514 "nvme_iov_md": false 00:11:45.514 }, 00:11:45.514 "memory_domains": [ 00:11:45.514 { 00:11:45.514 "dma_device_id": "system", 00:11:45.514 "dma_device_type": 1 00:11:45.514 }, 00:11:45.514 { 00:11:45.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.514 "dma_device_type": 2 00:11:45.514 } 00:11:45.514 ], 00:11:45.514 "driver_specific": {} 00:11:45.514 } 00:11:45.514 ] 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.514 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.515 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.515 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.515 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.515 "name": "Existed_Raid", 
00:11:45.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.515 "strip_size_kb": 64, 00:11:45.515 "state": "configuring", 00:11:45.515 "raid_level": "concat", 00:11:45.515 "superblock": false, 00:11:45.515 "num_base_bdevs": 4, 00:11:45.515 "num_base_bdevs_discovered": 1, 00:11:45.515 "num_base_bdevs_operational": 4, 00:11:45.515 "base_bdevs_list": [ 00:11:45.515 { 00:11:45.515 "name": "BaseBdev1", 00:11:45.515 "uuid": "8fe0ba20-784c-405e-b29e-d6746d153e04", 00:11:45.515 "is_configured": true, 00:11:45.515 "data_offset": 0, 00:11:45.515 "data_size": 65536 00:11:45.515 }, 00:11:45.515 { 00:11:45.515 "name": "BaseBdev2", 00:11:45.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.515 "is_configured": false, 00:11:45.515 "data_offset": 0, 00:11:45.515 "data_size": 0 00:11:45.515 }, 00:11:45.515 { 00:11:45.515 "name": "BaseBdev3", 00:11:45.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.515 "is_configured": false, 00:11:45.515 "data_offset": 0, 00:11:45.515 "data_size": 0 00:11:45.515 }, 00:11:45.515 { 00:11:45.515 "name": "BaseBdev4", 00:11:45.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.515 "is_configured": false, 00:11:45.515 "data_offset": 0, 00:11:45.515 "data_size": 0 00:11:45.515 } 00:11:45.515 ] 00:11:45.515 }' 00:11:45.515 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.515 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.083 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:46.083 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.083 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.083 [2024-11-26 19:01:12.553647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.083 [2024-11-26 19:01:12.553906] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:46.083 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.083 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:46.083 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.083 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.083 [2024-11-26 19:01:12.561704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.083 [2024-11-26 19:01:12.564391] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:46.083 [2024-11-26 19:01:12.564460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:46.083 [2024-11-26 19:01:12.564478] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:46.083 [2024-11-26 19:01:12.564496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:46.083 [2024-11-26 19:01:12.564506] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:46.083 [2024-11-26 19:01:12.564520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:46.083 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.083 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:46.083 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.084 "name": "Existed_Raid", 00:11:46.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.084 "strip_size_kb": 64, 00:11:46.084 "state": "configuring", 00:11:46.084 "raid_level": "concat", 00:11:46.084 "superblock": false, 00:11:46.084 "num_base_bdevs": 4, 00:11:46.084 
"num_base_bdevs_discovered": 1, 00:11:46.084 "num_base_bdevs_operational": 4, 00:11:46.084 "base_bdevs_list": [ 00:11:46.084 { 00:11:46.084 "name": "BaseBdev1", 00:11:46.084 "uuid": "8fe0ba20-784c-405e-b29e-d6746d153e04", 00:11:46.084 "is_configured": true, 00:11:46.084 "data_offset": 0, 00:11:46.084 "data_size": 65536 00:11:46.084 }, 00:11:46.084 { 00:11:46.084 "name": "BaseBdev2", 00:11:46.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.084 "is_configured": false, 00:11:46.084 "data_offset": 0, 00:11:46.084 "data_size": 0 00:11:46.084 }, 00:11:46.084 { 00:11:46.084 "name": "BaseBdev3", 00:11:46.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.084 "is_configured": false, 00:11:46.084 "data_offset": 0, 00:11:46.084 "data_size": 0 00:11:46.084 }, 00:11:46.084 { 00:11:46.084 "name": "BaseBdev4", 00:11:46.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.084 "is_configured": false, 00:11:46.084 "data_offset": 0, 00:11:46.084 "data_size": 0 00:11:46.084 } 00:11:46.084 ] 00:11:46.084 }' 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.084 19:01:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.652 [2024-11-26 19:01:13.121686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.652 BaseBdev2 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:46.652 19:01:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.652 [ 00:11:46.652 { 00:11:46.652 "name": "BaseBdev2", 00:11:46.652 "aliases": [ 00:11:46.652 "dc0aa864-754b-4923-badf-ecff0c70b49a" 00:11:46.652 ], 00:11:46.652 "product_name": "Malloc disk", 00:11:46.652 "block_size": 512, 00:11:46.652 "num_blocks": 65536, 00:11:46.652 "uuid": "dc0aa864-754b-4923-badf-ecff0c70b49a", 00:11:46.652 "assigned_rate_limits": { 00:11:46.652 "rw_ios_per_sec": 0, 00:11:46.652 "rw_mbytes_per_sec": 0, 00:11:46.652 "r_mbytes_per_sec": 0, 00:11:46.652 "w_mbytes_per_sec": 0 00:11:46.652 }, 00:11:46.652 "claimed": true, 00:11:46.652 "claim_type": "exclusive_write", 00:11:46.652 "zoned": false, 00:11:46.652 "supported_io_types": { 
00:11:46.652 "read": true, 00:11:46.652 "write": true, 00:11:46.652 "unmap": true, 00:11:46.652 "flush": true, 00:11:46.652 "reset": true, 00:11:46.652 "nvme_admin": false, 00:11:46.652 "nvme_io": false, 00:11:46.652 "nvme_io_md": false, 00:11:46.652 "write_zeroes": true, 00:11:46.652 "zcopy": true, 00:11:46.652 "get_zone_info": false, 00:11:46.652 "zone_management": false, 00:11:46.652 "zone_append": false, 00:11:46.652 "compare": false, 00:11:46.652 "compare_and_write": false, 00:11:46.652 "abort": true, 00:11:46.652 "seek_hole": false, 00:11:46.652 "seek_data": false, 00:11:46.652 "copy": true, 00:11:46.652 "nvme_iov_md": false 00:11:46.652 }, 00:11:46.652 "memory_domains": [ 00:11:46.652 { 00:11:46.652 "dma_device_id": "system", 00:11:46.652 "dma_device_type": 1 00:11:46.652 }, 00:11:46.652 { 00:11:46.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.652 "dma_device_type": 2 00:11:46.652 } 00:11:46.652 ], 00:11:46.652 "driver_specific": {} 00:11:46.652 } 00:11:46.652 ] 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.652 "name": "Existed_Raid", 00:11:46.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.652 "strip_size_kb": 64, 00:11:46.652 "state": "configuring", 00:11:46.652 "raid_level": "concat", 00:11:46.652 "superblock": false, 00:11:46.652 "num_base_bdevs": 4, 00:11:46.652 "num_base_bdevs_discovered": 2, 00:11:46.652 "num_base_bdevs_operational": 4, 00:11:46.652 "base_bdevs_list": [ 00:11:46.652 { 00:11:46.652 "name": "BaseBdev1", 00:11:46.652 "uuid": "8fe0ba20-784c-405e-b29e-d6746d153e04", 00:11:46.652 "is_configured": true, 00:11:46.652 "data_offset": 0, 00:11:46.652 "data_size": 65536 00:11:46.652 }, 00:11:46.652 { 00:11:46.652 "name": "BaseBdev2", 00:11:46.652 "uuid": "dc0aa864-754b-4923-badf-ecff0c70b49a", 00:11:46.652 
"is_configured": true, 00:11:46.652 "data_offset": 0, 00:11:46.652 "data_size": 65536 00:11:46.652 }, 00:11:46.652 { 00:11:46.652 "name": "BaseBdev3", 00:11:46.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.652 "is_configured": false, 00:11:46.652 "data_offset": 0, 00:11:46.652 "data_size": 0 00:11:46.652 }, 00:11:46.652 { 00:11:46.652 "name": "BaseBdev4", 00:11:46.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.652 "is_configured": false, 00:11:46.652 "data_offset": 0, 00:11:46.652 "data_size": 0 00:11:46.652 } 00:11:46.652 ] 00:11:46.652 }' 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.652 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.220 [2024-11-26 19:01:13.691424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.220 BaseBdev3 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.220 [ 00:11:47.220 { 00:11:47.220 "name": "BaseBdev3", 00:11:47.220 "aliases": [ 00:11:47.220 "07e7bf20-5e55-4efc-bcd9-8b87a6da7e52" 00:11:47.220 ], 00:11:47.220 "product_name": "Malloc disk", 00:11:47.220 "block_size": 512, 00:11:47.220 "num_blocks": 65536, 00:11:47.220 "uuid": "07e7bf20-5e55-4efc-bcd9-8b87a6da7e52", 00:11:47.220 "assigned_rate_limits": { 00:11:47.220 "rw_ios_per_sec": 0, 00:11:47.220 "rw_mbytes_per_sec": 0, 00:11:47.220 "r_mbytes_per_sec": 0, 00:11:47.220 "w_mbytes_per_sec": 0 00:11:47.220 }, 00:11:47.220 "claimed": true, 00:11:47.220 "claim_type": "exclusive_write", 00:11:47.220 "zoned": false, 00:11:47.220 "supported_io_types": { 00:11:47.220 "read": true, 00:11:47.220 "write": true, 00:11:47.220 "unmap": true, 00:11:47.220 "flush": true, 00:11:47.220 "reset": true, 00:11:47.220 "nvme_admin": false, 00:11:47.220 "nvme_io": false, 00:11:47.220 "nvme_io_md": false, 00:11:47.220 "write_zeroes": true, 00:11:47.220 "zcopy": true, 00:11:47.220 "get_zone_info": false, 00:11:47.220 "zone_management": false, 00:11:47.220 "zone_append": false, 00:11:47.220 "compare": false, 00:11:47.220 "compare_and_write": false, 
00:11:47.220 "abort": true, 00:11:47.220 "seek_hole": false, 00:11:47.220 "seek_data": false, 00:11:47.220 "copy": true, 00:11:47.220 "nvme_iov_md": false 00:11:47.220 }, 00:11:47.220 "memory_domains": [ 00:11:47.220 { 00:11:47.220 "dma_device_id": "system", 00:11:47.220 "dma_device_type": 1 00:11:47.220 }, 00:11:47.220 { 00:11:47.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.220 "dma_device_type": 2 00:11:47.220 } 00:11:47.220 ], 00:11:47.220 "driver_specific": {} 00:11:47.220 } 00:11:47.220 ] 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.220 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.221 "name": "Existed_Raid", 00:11:47.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.221 "strip_size_kb": 64, 00:11:47.221 "state": "configuring", 00:11:47.221 "raid_level": "concat", 00:11:47.221 "superblock": false, 00:11:47.221 "num_base_bdevs": 4, 00:11:47.221 "num_base_bdevs_discovered": 3, 00:11:47.221 "num_base_bdevs_operational": 4, 00:11:47.221 "base_bdevs_list": [ 00:11:47.221 { 00:11:47.221 "name": "BaseBdev1", 00:11:47.221 "uuid": "8fe0ba20-784c-405e-b29e-d6746d153e04", 00:11:47.221 "is_configured": true, 00:11:47.221 "data_offset": 0, 00:11:47.221 "data_size": 65536 00:11:47.221 }, 00:11:47.221 { 00:11:47.221 "name": "BaseBdev2", 00:11:47.221 "uuid": "dc0aa864-754b-4923-badf-ecff0c70b49a", 00:11:47.221 "is_configured": true, 00:11:47.221 "data_offset": 0, 00:11:47.221 "data_size": 65536 00:11:47.221 }, 00:11:47.221 { 00:11:47.221 "name": "BaseBdev3", 00:11:47.221 "uuid": "07e7bf20-5e55-4efc-bcd9-8b87a6da7e52", 00:11:47.221 "is_configured": true, 00:11:47.221 "data_offset": 0, 00:11:47.221 "data_size": 65536 00:11:47.221 }, 00:11:47.221 { 00:11:47.221 "name": "BaseBdev4", 00:11:47.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.221 "is_configured": false, 
00:11:47.221 "data_offset": 0, 00:11:47.221 "data_size": 0 00:11:47.221 } 00:11:47.221 ] 00:11:47.221 }' 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.221 19:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.788 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:47.788 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.788 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.788 [2024-11-26 19:01:14.286413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:47.788 [2024-11-26 19:01:14.286491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.788 [2024-11-26 19:01:14.286504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:47.788 [2024-11-26 19:01:14.286871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:47.789 [2024-11-26 19:01:14.287119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.789 [2024-11-26 19:01:14.287144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:47.789 [2024-11-26 19:01:14.287562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.789 BaseBdev4 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 [ 00:11:47.789 { 00:11:47.789 "name": "BaseBdev4", 00:11:47.789 "aliases": [ 00:11:47.789 "2c779edf-fe5b-464f-ba9e-50154ce4fc14" 00:11:47.789 ], 00:11:47.789 "product_name": "Malloc disk", 00:11:47.789 "block_size": 512, 00:11:47.789 "num_blocks": 65536, 00:11:47.789 "uuid": "2c779edf-fe5b-464f-ba9e-50154ce4fc14", 00:11:47.789 "assigned_rate_limits": { 00:11:47.789 "rw_ios_per_sec": 0, 00:11:47.789 "rw_mbytes_per_sec": 0, 00:11:47.789 "r_mbytes_per_sec": 0, 00:11:47.789 "w_mbytes_per_sec": 0 00:11:47.789 }, 00:11:47.789 "claimed": true, 00:11:47.789 "claim_type": "exclusive_write", 00:11:47.789 "zoned": false, 00:11:47.789 "supported_io_types": { 00:11:47.789 "read": true, 00:11:47.789 "write": true, 00:11:47.789 "unmap": true, 00:11:47.789 "flush": true, 00:11:47.789 "reset": true, 00:11:47.789 
"nvme_admin": false, 00:11:47.789 "nvme_io": false, 00:11:47.789 "nvme_io_md": false, 00:11:47.789 "write_zeroes": true, 00:11:47.789 "zcopy": true, 00:11:47.789 "get_zone_info": false, 00:11:47.789 "zone_management": false, 00:11:47.789 "zone_append": false, 00:11:47.789 "compare": false, 00:11:47.789 "compare_and_write": false, 00:11:47.789 "abort": true, 00:11:47.789 "seek_hole": false, 00:11:47.789 "seek_data": false, 00:11:47.789 "copy": true, 00:11:47.789 "nvme_iov_md": false 00:11:47.789 }, 00:11:47.789 "memory_domains": [ 00:11:47.789 { 00:11:47.789 "dma_device_id": "system", 00:11:47.789 "dma_device_type": 1 00:11:47.789 }, 00:11:47.789 { 00:11:47.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.789 "dma_device_type": 2 00:11:47.789 } 00:11:47.789 ], 00:11:47.789 "driver_specific": {} 00:11:47.789 } 00:11:47.789 ] 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.789 
19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.789 "name": "Existed_Raid", 00:11:47.789 "uuid": "4eedd4d7-6044-4ef5-9f26-ecdaa0ec4fbf", 00:11:47.789 "strip_size_kb": 64, 00:11:47.789 "state": "online", 00:11:47.789 "raid_level": "concat", 00:11:47.789 "superblock": false, 00:11:47.789 "num_base_bdevs": 4, 00:11:47.789 "num_base_bdevs_discovered": 4, 00:11:47.789 "num_base_bdevs_operational": 4, 00:11:47.789 "base_bdevs_list": [ 00:11:47.789 { 00:11:47.789 "name": "BaseBdev1", 00:11:47.789 "uuid": "8fe0ba20-784c-405e-b29e-d6746d153e04", 00:11:47.789 "is_configured": true, 00:11:47.789 "data_offset": 0, 00:11:47.789 "data_size": 65536 00:11:47.789 }, 00:11:47.789 { 00:11:47.789 "name": "BaseBdev2", 00:11:47.789 "uuid": "dc0aa864-754b-4923-badf-ecff0c70b49a", 00:11:47.789 "is_configured": true, 00:11:47.789 "data_offset": 0, 00:11:47.789 "data_size": 65536 00:11:47.789 }, 00:11:47.789 { 00:11:47.789 "name": "BaseBdev3", 
00:11:47.789 "uuid": "07e7bf20-5e55-4efc-bcd9-8b87a6da7e52", 00:11:47.789 "is_configured": true, 00:11:47.789 "data_offset": 0, 00:11:47.789 "data_size": 65536 00:11:47.789 }, 00:11:47.789 { 00:11:47.789 "name": "BaseBdev4", 00:11:47.789 "uuid": "2c779edf-fe5b-464f-ba9e-50154ce4fc14", 00:11:47.789 "is_configured": true, 00:11:47.789 "data_offset": 0, 00:11:47.789 "data_size": 65536 00:11:47.789 } 00:11:47.789 ] 00:11:47.789 }' 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.789 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.356 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:48.356 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:48.356 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.356 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.357 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.357 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.357 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.357 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:48.357 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.357 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.357 [2024-11-26 19:01:14.859155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.357 19:01:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.357 
19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.357 "name": "Existed_Raid", 00:11:48.357 "aliases": [ 00:11:48.357 "4eedd4d7-6044-4ef5-9f26-ecdaa0ec4fbf" 00:11:48.357 ], 00:11:48.357 "product_name": "Raid Volume", 00:11:48.357 "block_size": 512, 00:11:48.357 "num_blocks": 262144, 00:11:48.357 "uuid": "4eedd4d7-6044-4ef5-9f26-ecdaa0ec4fbf", 00:11:48.357 "assigned_rate_limits": { 00:11:48.357 "rw_ios_per_sec": 0, 00:11:48.357 "rw_mbytes_per_sec": 0, 00:11:48.357 "r_mbytes_per_sec": 0, 00:11:48.357 "w_mbytes_per_sec": 0 00:11:48.357 }, 00:11:48.357 "claimed": false, 00:11:48.357 "zoned": false, 00:11:48.357 "supported_io_types": { 00:11:48.357 "read": true, 00:11:48.357 "write": true, 00:11:48.357 "unmap": true, 00:11:48.357 "flush": true, 00:11:48.357 "reset": true, 00:11:48.357 "nvme_admin": false, 00:11:48.357 "nvme_io": false, 00:11:48.357 "nvme_io_md": false, 00:11:48.357 "write_zeroes": true, 00:11:48.357 "zcopy": false, 00:11:48.357 "get_zone_info": false, 00:11:48.357 "zone_management": false, 00:11:48.357 "zone_append": false, 00:11:48.357 "compare": false, 00:11:48.357 "compare_and_write": false, 00:11:48.357 "abort": false, 00:11:48.357 "seek_hole": false, 00:11:48.357 "seek_data": false, 00:11:48.357 "copy": false, 00:11:48.357 "nvme_iov_md": false 00:11:48.357 }, 00:11:48.357 "memory_domains": [ 00:11:48.357 { 00:11:48.357 "dma_device_id": "system", 00:11:48.357 "dma_device_type": 1 00:11:48.357 }, 00:11:48.357 { 00:11:48.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.357 "dma_device_type": 2 00:11:48.357 }, 00:11:48.357 { 00:11:48.357 "dma_device_id": "system", 00:11:48.357 "dma_device_type": 1 00:11:48.357 }, 00:11:48.357 { 00:11:48.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.357 "dma_device_type": 2 00:11:48.357 }, 00:11:48.357 { 00:11:48.357 "dma_device_id": "system", 00:11:48.357 "dma_device_type": 1 00:11:48.357 }, 00:11:48.357 { 00:11:48.357 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:48.357 "dma_device_type": 2 00:11:48.357 }, 00:11:48.357 { 00:11:48.357 "dma_device_id": "system", 00:11:48.357 "dma_device_type": 1 00:11:48.357 }, 00:11:48.357 { 00:11:48.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.357 "dma_device_type": 2 00:11:48.357 } 00:11:48.357 ], 00:11:48.357 "driver_specific": { 00:11:48.357 "raid": { 00:11:48.357 "uuid": "4eedd4d7-6044-4ef5-9f26-ecdaa0ec4fbf", 00:11:48.357 "strip_size_kb": 64, 00:11:48.357 "state": "online", 00:11:48.357 "raid_level": "concat", 00:11:48.357 "superblock": false, 00:11:48.357 "num_base_bdevs": 4, 00:11:48.357 "num_base_bdevs_discovered": 4, 00:11:48.357 "num_base_bdevs_operational": 4, 00:11:48.357 "base_bdevs_list": [ 00:11:48.357 { 00:11:48.357 "name": "BaseBdev1", 00:11:48.357 "uuid": "8fe0ba20-784c-405e-b29e-d6746d153e04", 00:11:48.357 "is_configured": true, 00:11:48.357 "data_offset": 0, 00:11:48.357 "data_size": 65536 00:11:48.357 }, 00:11:48.357 { 00:11:48.357 "name": "BaseBdev2", 00:11:48.357 "uuid": "dc0aa864-754b-4923-badf-ecff0c70b49a", 00:11:48.357 "is_configured": true, 00:11:48.357 "data_offset": 0, 00:11:48.357 "data_size": 65536 00:11:48.357 }, 00:11:48.357 { 00:11:48.357 "name": "BaseBdev3", 00:11:48.357 "uuid": "07e7bf20-5e55-4efc-bcd9-8b87a6da7e52", 00:11:48.357 "is_configured": true, 00:11:48.357 "data_offset": 0, 00:11:48.357 "data_size": 65536 00:11:48.357 }, 00:11:48.357 { 00:11:48.357 "name": "BaseBdev4", 00:11:48.357 "uuid": "2c779edf-fe5b-464f-ba9e-50154ce4fc14", 00:11:48.357 "is_configured": true, 00:11:48.357 "data_offset": 0, 00:11:48.357 "data_size": 65536 00:11:48.357 } 00:11:48.357 ] 00:11:48.357 } 00:11:48.357 } 00:11:48.357 }' 00:11:48.357 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.357 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:48.357 BaseBdev2 
00:11:48.357 BaseBdev3 00:11:48.357 BaseBdev4' 00:11:48.357 19:01:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.617 19:01:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.617 19:01:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.617 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.617 [2024-11-26 19:01:15.226793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.617 [2024-11-26 19:01:15.226835] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.617 [2024-11-26 19:01:15.226905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.877 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.878 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.878 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.878 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.878 "name": "Existed_Raid", 00:11:48.878 "uuid": "4eedd4d7-6044-4ef5-9f26-ecdaa0ec4fbf", 00:11:48.878 "strip_size_kb": 64, 00:11:48.878 "state": "offline", 00:11:48.878 "raid_level": "concat", 00:11:48.878 "superblock": false, 00:11:48.878 "num_base_bdevs": 4, 00:11:48.878 "num_base_bdevs_discovered": 3, 00:11:48.878 "num_base_bdevs_operational": 3, 00:11:48.878 "base_bdevs_list": [ 00:11:48.878 { 00:11:48.878 "name": null, 00:11:48.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.878 "is_configured": false, 00:11:48.878 "data_offset": 0, 00:11:48.878 "data_size": 65536 00:11:48.878 }, 00:11:48.878 { 00:11:48.878 "name": "BaseBdev2", 00:11:48.878 "uuid": "dc0aa864-754b-4923-badf-ecff0c70b49a", 00:11:48.878 "is_configured": 
true, 00:11:48.878 "data_offset": 0, 00:11:48.878 "data_size": 65536 00:11:48.878 }, 00:11:48.878 { 00:11:48.878 "name": "BaseBdev3", 00:11:48.878 "uuid": "07e7bf20-5e55-4efc-bcd9-8b87a6da7e52", 00:11:48.878 "is_configured": true, 00:11:48.878 "data_offset": 0, 00:11:48.878 "data_size": 65536 00:11:48.878 }, 00:11:48.878 { 00:11:48.878 "name": "BaseBdev4", 00:11:48.878 "uuid": "2c779edf-fe5b-464f-ba9e-50154ce4fc14", 00:11:48.878 "is_configured": true, 00:11:48.878 "data_offset": 0, 00:11:48.878 "data_size": 65536 00:11:48.878 } 00:11:48.878 ] 00:11:48.878 }' 00:11:48.878 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.878 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:49.445 19:01:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.445 [2024-11-26 19:01:15.925993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.445 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.445 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:49.445 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.445 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.445 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:49.445 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.445 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.445 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.704 [2024-11-26 19:01:16.075821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:49.704 19:01:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.704 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.704 [2024-11-26 19:01:16.232914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:49.704 [2024-11-26 19:01:16.232981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.963 BaseBdev2 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:49.963 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.964 [ 00:11:49.964 { 00:11:49.964 "name": "BaseBdev2", 00:11:49.964 "aliases": [ 00:11:49.964 "c137679d-db96-4829-897b-d65430d2c80f" 00:11:49.964 ], 00:11:49.964 "product_name": "Malloc disk", 00:11:49.964 "block_size": 512, 00:11:49.964 "num_blocks": 65536, 00:11:49.964 "uuid": "c137679d-db96-4829-897b-d65430d2c80f", 00:11:49.964 "assigned_rate_limits": { 00:11:49.964 "rw_ios_per_sec": 0, 00:11:49.964 "rw_mbytes_per_sec": 0, 00:11:49.964 "r_mbytes_per_sec": 0, 00:11:49.964 "w_mbytes_per_sec": 0 00:11:49.964 }, 00:11:49.964 "claimed": false, 00:11:49.964 "zoned": false, 00:11:49.964 "supported_io_types": { 00:11:49.964 "read": true, 00:11:49.964 "write": true, 00:11:49.964 "unmap": true, 00:11:49.964 "flush": true, 00:11:49.964 "reset": true, 00:11:49.964 "nvme_admin": false, 00:11:49.964 "nvme_io": false, 00:11:49.964 "nvme_io_md": false, 00:11:49.964 "write_zeroes": true, 00:11:49.964 "zcopy": true, 00:11:49.964 "get_zone_info": false, 00:11:49.964 "zone_management": false, 00:11:49.964 "zone_append": false, 00:11:49.964 "compare": false, 00:11:49.964 "compare_and_write": false, 00:11:49.964 "abort": true, 00:11:49.964 "seek_hole": false, 00:11:49.964 
"seek_data": false, 00:11:49.964 "copy": true, 00:11:49.964 "nvme_iov_md": false 00:11:49.964 }, 00:11:49.964 "memory_domains": [ 00:11:49.964 { 00:11:49.964 "dma_device_id": "system", 00:11:49.964 "dma_device_type": 1 00:11:49.964 }, 00:11:49.964 { 00:11:49.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.964 "dma_device_type": 2 00:11:49.964 } 00:11:49.964 ], 00:11:49.964 "driver_specific": {} 00:11:49.964 } 00:11:49.964 ] 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.964 BaseBdev3 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.964 [ 00:11:49.964 { 00:11:49.964 "name": "BaseBdev3", 00:11:49.964 "aliases": [ 00:11:49.964 "4d8749f2-b38d-4531-ae20-2d34584ae5e5" 00:11:49.964 ], 00:11:49.964 "product_name": "Malloc disk", 00:11:49.964 "block_size": 512, 00:11:49.964 "num_blocks": 65536, 00:11:49.964 "uuid": "4d8749f2-b38d-4531-ae20-2d34584ae5e5", 00:11:49.964 "assigned_rate_limits": { 00:11:49.964 "rw_ios_per_sec": 0, 00:11:49.964 "rw_mbytes_per_sec": 0, 00:11:49.964 "r_mbytes_per_sec": 0, 00:11:49.964 "w_mbytes_per_sec": 0 00:11:49.964 }, 00:11:49.964 "claimed": false, 00:11:49.964 "zoned": false, 00:11:49.964 "supported_io_types": { 00:11:49.964 "read": true, 00:11:49.964 "write": true, 00:11:49.964 "unmap": true, 00:11:49.964 "flush": true, 00:11:49.964 "reset": true, 00:11:49.964 "nvme_admin": false, 00:11:49.964 "nvme_io": false, 00:11:49.964 "nvme_io_md": false, 00:11:49.964 "write_zeroes": true, 00:11:49.964 "zcopy": true, 00:11:49.964 "get_zone_info": false, 00:11:49.964 "zone_management": false, 00:11:49.964 "zone_append": false, 00:11:49.964 "compare": false, 00:11:49.964 "compare_and_write": false, 00:11:49.964 "abort": true, 00:11:49.964 "seek_hole": false, 00:11:49.964 "seek_data": false, 
00:11:49.964 "copy": true, 00:11:49.964 "nvme_iov_md": false 00:11:49.964 }, 00:11:49.964 "memory_domains": [ 00:11:49.964 { 00:11:49.964 "dma_device_id": "system", 00:11:49.964 "dma_device_type": 1 00:11:49.964 }, 00:11:49.964 { 00:11:49.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.964 "dma_device_type": 2 00:11:49.964 } 00:11:49.964 ], 00:11:49.964 "driver_specific": {} 00:11:49.964 } 00:11:49.964 ] 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.964 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.223 BaseBdev4 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.223 
19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.223 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.223 [ 00:11:50.223 { 00:11:50.223 "name": "BaseBdev4", 00:11:50.223 "aliases": [ 00:11:50.223 "1acadfdd-9c01-4f58-9874-061ca4ab8a24" 00:11:50.223 ], 00:11:50.223 "product_name": "Malloc disk", 00:11:50.223 "block_size": 512, 00:11:50.223 "num_blocks": 65536, 00:11:50.223 "uuid": "1acadfdd-9c01-4f58-9874-061ca4ab8a24", 00:11:50.223 "assigned_rate_limits": { 00:11:50.223 "rw_ios_per_sec": 0, 00:11:50.223 "rw_mbytes_per_sec": 0, 00:11:50.223 "r_mbytes_per_sec": 0, 00:11:50.223 "w_mbytes_per_sec": 0 00:11:50.223 }, 00:11:50.223 "claimed": false, 00:11:50.223 "zoned": false, 00:11:50.223 "supported_io_types": { 00:11:50.223 "read": true, 00:11:50.223 "write": true, 00:11:50.223 "unmap": true, 00:11:50.223 "flush": true, 00:11:50.223 "reset": true, 00:11:50.223 "nvme_admin": false, 00:11:50.223 "nvme_io": false, 00:11:50.223 "nvme_io_md": false, 00:11:50.223 "write_zeroes": true, 00:11:50.223 "zcopy": true, 00:11:50.223 "get_zone_info": false, 00:11:50.223 "zone_management": false, 00:11:50.223 "zone_append": false, 00:11:50.223 "compare": false, 00:11:50.223 "compare_and_write": false, 00:11:50.223 "abort": true, 00:11:50.223 "seek_hole": false, 00:11:50.224 "seek_data": false, 00:11:50.224 
"copy": true, 00:11:50.224 "nvme_iov_md": false 00:11:50.224 }, 00:11:50.224 "memory_domains": [ 00:11:50.224 { 00:11:50.224 "dma_device_id": "system", 00:11:50.224 "dma_device_type": 1 00:11:50.224 }, 00:11:50.224 { 00:11:50.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.224 "dma_device_type": 2 00:11:50.224 } 00:11:50.224 ], 00:11:50.224 "driver_specific": {} 00:11:50.224 } 00:11:50.224 ] 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.224 [2024-11-26 19:01:16.631551] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:50.224 [2024-11-26 19:01:16.631625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:50.224 [2024-11-26 19:01:16.631675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.224 [2024-11-26 19:01:16.634428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.224 [2024-11-26 19:01:16.634529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.224 19:01:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.224 "name": "Existed_Raid", 00:11:50.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.224 "strip_size_kb": 64, 00:11:50.224 "state": "configuring", 00:11:50.224 
"raid_level": "concat", 00:11:50.224 "superblock": false, 00:11:50.224 "num_base_bdevs": 4, 00:11:50.224 "num_base_bdevs_discovered": 3, 00:11:50.224 "num_base_bdevs_operational": 4, 00:11:50.224 "base_bdevs_list": [ 00:11:50.224 { 00:11:50.224 "name": "BaseBdev1", 00:11:50.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.224 "is_configured": false, 00:11:50.224 "data_offset": 0, 00:11:50.224 "data_size": 0 00:11:50.224 }, 00:11:50.224 { 00:11:50.224 "name": "BaseBdev2", 00:11:50.224 "uuid": "c137679d-db96-4829-897b-d65430d2c80f", 00:11:50.224 "is_configured": true, 00:11:50.224 "data_offset": 0, 00:11:50.224 "data_size": 65536 00:11:50.224 }, 00:11:50.224 { 00:11:50.224 "name": "BaseBdev3", 00:11:50.224 "uuid": "4d8749f2-b38d-4531-ae20-2d34584ae5e5", 00:11:50.224 "is_configured": true, 00:11:50.224 "data_offset": 0, 00:11:50.224 "data_size": 65536 00:11:50.224 }, 00:11:50.224 { 00:11:50.224 "name": "BaseBdev4", 00:11:50.224 "uuid": "1acadfdd-9c01-4f58-9874-061ca4ab8a24", 00:11:50.224 "is_configured": true, 00:11:50.224 "data_offset": 0, 00:11:50.224 "data_size": 65536 00:11:50.224 } 00:11:50.224 ] 00:11:50.224 }' 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.224 19:01:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.820 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:50.820 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.820 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.820 [2024-11-26 19:01:17.171765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.821 "name": "Existed_Raid", 00:11:50.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.821 "strip_size_kb": 64, 00:11:50.821 "state": "configuring", 00:11:50.821 "raid_level": "concat", 00:11:50.821 "superblock": false, 
00:11:50.821 "num_base_bdevs": 4, 00:11:50.821 "num_base_bdevs_discovered": 2, 00:11:50.821 "num_base_bdevs_operational": 4, 00:11:50.821 "base_bdevs_list": [ 00:11:50.821 { 00:11:50.821 "name": "BaseBdev1", 00:11:50.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.821 "is_configured": false, 00:11:50.821 "data_offset": 0, 00:11:50.821 "data_size": 0 00:11:50.821 }, 00:11:50.821 { 00:11:50.821 "name": null, 00:11:50.821 "uuid": "c137679d-db96-4829-897b-d65430d2c80f", 00:11:50.821 "is_configured": false, 00:11:50.821 "data_offset": 0, 00:11:50.821 "data_size": 65536 00:11:50.821 }, 00:11:50.821 { 00:11:50.821 "name": "BaseBdev3", 00:11:50.821 "uuid": "4d8749f2-b38d-4531-ae20-2d34584ae5e5", 00:11:50.821 "is_configured": true, 00:11:50.821 "data_offset": 0, 00:11:50.821 "data_size": 65536 00:11:50.821 }, 00:11:50.821 { 00:11:50.821 "name": "BaseBdev4", 00:11:50.821 "uuid": "1acadfdd-9c01-4f58-9874-061ca4ab8a24", 00:11:50.821 "is_configured": true, 00:11:50.821 "data_offset": 0, 00:11:50.821 "data_size": 65536 00:11:50.821 } 00:11:50.821 ] 00:11:50.821 }' 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.821 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:51.386 19:01:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.386 [2024-11-26 19:01:17.808828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.386 BaseBdev1 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:51.386 [ 00:11:51.386 { 00:11:51.386 "name": "BaseBdev1", 00:11:51.386 "aliases": [ 00:11:51.386 "aa6eebaf-2f69-460a-8ccc-b1414695af80" 00:11:51.386 ], 00:11:51.386 "product_name": "Malloc disk", 00:11:51.386 "block_size": 512, 00:11:51.386 "num_blocks": 65536, 00:11:51.386 "uuid": "aa6eebaf-2f69-460a-8ccc-b1414695af80", 00:11:51.386 "assigned_rate_limits": { 00:11:51.386 "rw_ios_per_sec": 0, 00:11:51.386 "rw_mbytes_per_sec": 0, 00:11:51.386 "r_mbytes_per_sec": 0, 00:11:51.386 "w_mbytes_per_sec": 0 00:11:51.386 }, 00:11:51.386 "claimed": true, 00:11:51.386 "claim_type": "exclusive_write", 00:11:51.386 "zoned": false, 00:11:51.386 "supported_io_types": { 00:11:51.386 "read": true, 00:11:51.386 "write": true, 00:11:51.386 "unmap": true, 00:11:51.386 "flush": true, 00:11:51.386 "reset": true, 00:11:51.386 "nvme_admin": false, 00:11:51.386 "nvme_io": false, 00:11:51.386 "nvme_io_md": false, 00:11:51.386 "write_zeroes": true, 00:11:51.386 "zcopy": true, 00:11:51.386 "get_zone_info": false, 00:11:51.386 "zone_management": false, 00:11:51.386 "zone_append": false, 00:11:51.386 "compare": false, 00:11:51.386 "compare_and_write": false, 00:11:51.386 "abort": true, 00:11:51.386 "seek_hole": false, 00:11:51.386 "seek_data": false, 00:11:51.386 "copy": true, 00:11:51.386 "nvme_iov_md": false 00:11:51.386 }, 00:11:51.386 "memory_domains": [ 00:11:51.386 { 00:11:51.386 "dma_device_id": "system", 00:11:51.386 "dma_device_type": 1 00:11:51.386 }, 00:11:51.386 { 00:11:51.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.386 "dma_device_type": 2 00:11:51.386 } 00:11:51.386 ], 00:11:51.386 "driver_specific": {} 00:11:51.386 } 00:11:51.386 ] 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.386 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.387 "name": "Existed_Raid", 00:11:51.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.387 "strip_size_kb": 64, 00:11:51.387 "state": "configuring", 00:11:51.387 "raid_level": "concat", 00:11:51.387 "superblock": false, 
00:11:51.387 "num_base_bdevs": 4, 00:11:51.387 "num_base_bdevs_discovered": 3, 00:11:51.387 "num_base_bdevs_operational": 4, 00:11:51.387 "base_bdevs_list": [ 00:11:51.387 { 00:11:51.387 "name": "BaseBdev1", 00:11:51.387 "uuid": "aa6eebaf-2f69-460a-8ccc-b1414695af80", 00:11:51.387 "is_configured": true, 00:11:51.387 "data_offset": 0, 00:11:51.387 "data_size": 65536 00:11:51.387 }, 00:11:51.387 { 00:11:51.387 "name": null, 00:11:51.387 "uuid": "c137679d-db96-4829-897b-d65430d2c80f", 00:11:51.387 "is_configured": false, 00:11:51.387 "data_offset": 0, 00:11:51.387 "data_size": 65536 00:11:51.387 }, 00:11:51.387 { 00:11:51.387 "name": "BaseBdev3", 00:11:51.387 "uuid": "4d8749f2-b38d-4531-ae20-2d34584ae5e5", 00:11:51.387 "is_configured": true, 00:11:51.387 "data_offset": 0, 00:11:51.387 "data_size": 65536 00:11:51.387 }, 00:11:51.387 { 00:11:51.387 "name": "BaseBdev4", 00:11:51.387 "uuid": "1acadfdd-9c01-4f58-9874-061ca4ab8a24", 00:11:51.387 "is_configured": true, 00:11:51.387 "data_offset": 0, 00:11:51.387 "data_size": 65536 00:11:51.387 } 00:11:51.387 ] 00:11:51.387 }' 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.387 19:01:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:51.953 19:01:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.953 [2024-11-26 19:01:18.405093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.953 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.954 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.954 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.954 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.954 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.954 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.954 19:01:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.954 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.954 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.954 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.954 "name": "Existed_Raid", 00:11:51.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.954 "strip_size_kb": 64, 00:11:51.954 "state": "configuring", 00:11:51.954 "raid_level": "concat", 00:11:51.954 "superblock": false, 00:11:51.954 "num_base_bdevs": 4, 00:11:51.954 "num_base_bdevs_discovered": 2, 00:11:51.954 "num_base_bdevs_operational": 4, 00:11:51.954 "base_bdevs_list": [ 00:11:51.954 { 00:11:51.954 "name": "BaseBdev1", 00:11:51.954 "uuid": "aa6eebaf-2f69-460a-8ccc-b1414695af80", 00:11:51.954 "is_configured": true, 00:11:51.954 "data_offset": 0, 00:11:51.954 "data_size": 65536 00:11:51.954 }, 00:11:51.954 { 00:11:51.954 "name": null, 00:11:51.954 "uuid": "c137679d-db96-4829-897b-d65430d2c80f", 00:11:51.954 "is_configured": false, 00:11:51.954 "data_offset": 0, 00:11:51.954 "data_size": 65536 00:11:51.954 }, 00:11:51.954 { 00:11:51.954 "name": null, 00:11:51.954 "uuid": "4d8749f2-b38d-4531-ae20-2d34584ae5e5", 00:11:51.954 "is_configured": false, 00:11:51.954 "data_offset": 0, 00:11:51.954 "data_size": 65536 00:11:51.954 }, 00:11:51.954 { 00:11:51.954 "name": "BaseBdev4", 00:11:51.954 "uuid": "1acadfdd-9c01-4f58-9874-061ca4ab8a24", 00:11:51.954 "is_configured": true, 00:11:51.954 "data_offset": 0, 00:11:51.954 "data_size": 65536 00:11:51.954 } 00:11:51.954 ] 00:11:51.954 }' 00:11:51.954 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.954 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.520 [2024-11-26 19:01:18.977254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.520 19:01:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.520 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.520 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.520 "name": "Existed_Raid", 00:11:52.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.520 "strip_size_kb": 64, 00:11:52.520 "state": "configuring", 00:11:52.520 "raid_level": "concat", 00:11:52.520 "superblock": false, 00:11:52.520 "num_base_bdevs": 4, 00:11:52.520 "num_base_bdevs_discovered": 3, 00:11:52.520 "num_base_bdevs_operational": 4, 00:11:52.520 "base_bdevs_list": [ 00:11:52.520 { 00:11:52.520 "name": "BaseBdev1", 00:11:52.520 "uuid": "aa6eebaf-2f69-460a-8ccc-b1414695af80", 00:11:52.520 "is_configured": true, 00:11:52.520 "data_offset": 0, 00:11:52.520 "data_size": 65536 00:11:52.520 }, 00:11:52.520 { 00:11:52.520 "name": null, 00:11:52.520 "uuid": "c137679d-db96-4829-897b-d65430d2c80f", 00:11:52.520 "is_configured": false, 00:11:52.520 "data_offset": 0, 00:11:52.520 "data_size": 65536 00:11:52.520 }, 00:11:52.520 { 00:11:52.520 "name": "BaseBdev3", 00:11:52.520 "uuid": 
"4d8749f2-b38d-4531-ae20-2d34584ae5e5", 00:11:52.520 "is_configured": true, 00:11:52.520 "data_offset": 0, 00:11:52.520 "data_size": 65536 00:11:52.520 }, 00:11:52.520 { 00:11:52.520 "name": "BaseBdev4", 00:11:52.520 "uuid": "1acadfdd-9c01-4f58-9874-061ca4ab8a24", 00:11:52.520 "is_configured": true, 00:11:52.520 "data_offset": 0, 00:11:52.520 "data_size": 65536 00:11:52.520 } 00:11:52.520 ] 00:11:52.520 }' 00:11:52.520 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.520 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.086 [2024-11-26 19:01:19.573508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.086 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.345 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.345 "name": "Existed_Raid", 00:11:53.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.345 "strip_size_kb": 64, 00:11:53.345 "state": "configuring", 00:11:53.345 "raid_level": "concat", 00:11:53.345 "superblock": false, 00:11:53.345 "num_base_bdevs": 4, 00:11:53.345 
"num_base_bdevs_discovered": 2, 00:11:53.345 "num_base_bdevs_operational": 4, 00:11:53.345 "base_bdevs_list": [ 00:11:53.345 { 00:11:53.345 "name": null, 00:11:53.345 "uuid": "aa6eebaf-2f69-460a-8ccc-b1414695af80", 00:11:53.345 "is_configured": false, 00:11:53.345 "data_offset": 0, 00:11:53.345 "data_size": 65536 00:11:53.345 }, 00:11:53.345 { 00:11:53.345 "name": null, 00:11:53.345 "uuid": "c137679d-db96-4829-897b-d65430d2c80f", 00:11:53.345 "is_configured": false, 00:11:53.345 "data_offset": 0, 00:11:53.345 "data_size": 65536 00:11:53.345 }, 00:11:53.345 { 00:11:53.345 "name": "BaseBdev3", 00:11:53.345 "uuid": "4d8749f2-b38d-4531-ae20-2d34584ae5e5", 00:11:53.345 "is_configured": true, 00:11:53.345 "data_offset": 0, 00:11:53.345 "data_size": 65536 00:11:53.345 }, 00:11:53.345 { 00:11:53.345 "name": "BaseBdev4", 00:11:53.345 "uuid": "1acadfdd-9c01-4f58-9874-061ca4ab8a24", 00:11:53.345 "is_configured": true, 00:11:53.345 "data_offset": 0, 00:11:53.345 "data_size": 65536 00:11:53.345 } 00:11:53.345 ] 00:11:53.345 }' 00:11:53.345 19:01:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.345 19:01:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.603 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.603 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:53.603 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.603 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.861 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.861 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:53.861 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:53.861 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.861 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.861 [2024-11-26 19:01:20.270239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.861 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.861 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.862 "name": "Existed_Raid", 00:11:53.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.862 "strip_size_kb": 64, 00:11:53.862 "state": "configuring", 00:11:53.862 "raid_level": "concat", 00:11:53.862 "superblock": false, 00:11:53.862 "num_base_bdevs": 4, 00:11:53.862 "num_base_bdevs_discovered": 3, 00:11:53.862 "num_base_bdevs_operational": 4, 00:11:53.862 "base_bdevs_list": [ 00:11:53.862 { 00:11:53.862 "name": null, 00:11:53.862 "uuid": "aa6eebaf-2f69-460a-8ccc-b1414695af80", 00:11:53.862 "is_configured": false, 00:11:53.862 "data_offset": 0, 00:11:53.862 "data_size": 65536 00:11:53.862 }, 00:11:53.862 { 00:11:53.862 "name": "BaseBdev2", 00:11:53.862 "uuid": "c137679d-db96-4829-897b-d65430d2c80f", 00:11:53.862 "is_configured": true, 00:11:53.862 "data_offset": 0, 00:11:53.862 "data_size": 65536 00:11:53.862 }, 00:11:53.862 { 00:11:53.862 "name": "BaseBdev3", 00:11:53.862 "uuid": "4d8749f2-b38d-4531-ae20-2d34584ae5e5", 00:11:53.862 "is_configured": true, 00:11:53.862 "data_offset": 0, 00:11:53.862 "data_size": 65536 00:11:53.862 }, 00:11:53.862 { 00:11:53.862 "name": "BaseBdev4", 00:11:53.862 "uuid": "1acadfdd-9c01-4f58-9874-061ca4ab8a24", 00:11:53.862 "is_configured": true, 00:11:53.862 "data_offset": 0, 00:11:53.862 "data_size": 65536 00:11:53.862 } 00:11:53.862 ] 00:11:53.862 }' 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.862 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aa6eebaf-2f69-460a-8ccc-b1414695af80 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.429 [2024-11-26 19:01:20.960848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:54.429 [2024-11-26 19:01:20.960918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:54.429 [2024-11-26 19:01:20.960931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:54.429 [2024-11-26 19:01:20.961341] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:54.429 [2024-11-26 19:01:20.961534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:54.429 [2024-11-26 19:01:20.961556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:54.429 [2024-11-26 19:01:20.961865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.429 NewBaseBdev 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.429 19:01:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.429 [ 00:11:54.429 { 00:11:54.429 "name": "NewBaseBdev", 00:11:54.429 "aliases": [ 00:11:54.429 "aa6eebaf-2f69-460a-8ccc-b1414695af80" 00:11:54.429 ], 00:11:54.429 "product_name": "Malloc disk", 00:11:54.429 "block_size": 512, 00:11:54.429 "num_blocks": 65536, 00:11:54.429 "uuid": "aa6eebaf-2f69-460a-8ccc-b1414695af80", 00:11:54.429 "assigned_rate_limits": { 00:11:54.429 "rw_ios_per_sec": 0, 00:11:54.429 "rw_mbytes_per_sec": 0, 00:11:54.429 "r_mbytes_per_sec": 0, 00:11:54.429 "w_mbytes_per_sec": 0 00:11:54.429 }, 00:11:54.429 "claimed": true, 00:11:54.429 "claim_type": "exclusive_write", 00:11:54.429 "zoned": false, 00:11:54.429 "supported_io_types": { 00:11:54.429 "read": true, 00:11:54.429 "write": true, 00:11:54.429 "unmap": true, 00:11:54.429 "flush": true, 00:11:54.429 "reset": true, 00:11:54.429 "nvme_admin": false, 00:11:54.429 "nvme_io": false, 00:11:54.429 "nvme_io_md": false, 00:11:54.429 "write_zeroes": true, 00:11:54.429 "zcopy": true, 00:11:54.429 "get_zone_info": false, 00:11:54.429 "zone_management": false, 00:11:54.429 "zone_append": false, 00:11:54.429 "compare": false, 00:11:54.429 "compare_and_write": false, 00:11:54.429 "abort": true, 00:11:54.429 "seek_hole": false, 00:11:54.429 "seek_data": false, 00:11:54.429 "copy": true, 00:11:54.429 "nvme_iov_md": false 00:11:54.429 }, 00:11:54.429 "memory_domains": [ 00:11:54.429 { 00:11:54.429 "dma_device_id": "system", 00:11:54.429 "dma_device_type": 1 00:11:54.429 }, 00:11:54.429 { 00:11:54.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.429 "dma_device_type": 2 00:11:54.429 } 00:11:54.429 ], 00:11:54.429 "driver_specific": {} 00:11:54.429 } 00:11:54.429 ] 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.429 19:01:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.429 19:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.429 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.688 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.688 "name": "Existed_Raid", 00:11:54.688 "uuid": "8c024803-e69b-4aab-8dd7-9c2edd05fb15", 00:11:54.688 "strip_size_kb": 64, 00:11:54.688 "state": "online", 00:11:54.688 "raid_level": 
"concat", 00:11:54.688 "superblock": false, 00:11:54.688 "num_base_bdevs": 4, 00:11:54.688 "num_base_bdevs_discovered": 4, 00:11:54.688 "num_base_bdevs_operational": 4, 00:11:54.688 "base_bdevs_list": [ 00:11:54.688 { 00:11:54.688 "name": "NewBaseBdev", 00:11:54.688 "uuid": "aa6eebaf-2f69-460a-8ccc-b1414695af80", 00:11:54.688 "is_configured": true, 00:11:54.688 "data_offset": 0, 00:11:54.688 "data_size": 65536 00:11:54.688 }, 00:11:54.688 { 00:11:54.688 "name": "BaseBdev2", 00:11:54.688 "uuid": "c137679d-db96-4829-897b-d65430d2c80f", 00:11:54.688 "is_configured": true, 00:11:54.688 "data_offset": 0, 00:11:54.688 "data_size": 65536 00:11:54.688 }, 00:11:54.688 { 00:11:54.688 "name": "BaseBdev3", 00:11:54.688 "uuid": "4d8749f2-b38d-4531-ae20-2d34584ae5e5", 00:11:54.688 "is_configured": true, 00:11:54.688 "data_offset": 0, 00:11:54.688 "data_size": 65536 00:11:54.688 }, 00:11:54.688 { 00:11:54.688 "name": "BaseBdev4", 00:11:54.688 "uuid": "1acadfdd-9c01-4f58-9874-061ca4ab8a24", 00:11:54.688 "is_configured": true, 00:11:54.688 "data_offset": 0, 00:11:54.688 "data_size": 65536 00:11:54.688 } 00:11:54.688 ] 00:11:54.688 }' 00:11:54.688 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.688 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.947 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.947 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:54.947 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:54.947 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:54.947 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:54.947 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:11:54.947 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:54.947 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:54.947 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.947 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.947 [2024-11-26 19:01:21.537612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.947 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:55.207 "name": "Existed_Raid", 00:11:55.207 "aliases": [ 00:11:55.207 "8c024803-e69b-4aab-8dd7-9c2edd05fb15" 00:11:55.207 ], 00:11:55.207 "product_name": "Raid Volume", 00:11:55.207 "block_size": 512, 00:11:55.207 "num_blocks": 262144, 00:11:55.207 "uuid": "8c024803-e69b-4aab-8dd7-9c2edd05fb15", 00:11:55.207 "assigned_rate_limits": { 00:11:55.207 "rw_ios_per_sec": 0, 00:11:55.207 "rw_mbytes_per_sec": 0, 00:11:55.207 "r_mbytes_per_sec": 0, 00:11:55.207 "w_mbytes_per_sec": 0 00:11:55.207 }, 00:11:55.207 "claimed": false, 00:11:55.207 "zoned": false, 00:11:55.207 "supported_io_types": { 00:11:55.207 "read": true, 00:11:55.207 "write": true, 00:11:55.207 "unmap": true, 00:11:55.207 "flush": true, 00:11:55.207 "reset": true, 00:11:55.207 "nvme_admin": false, 00:11:55.207 "nvme_io": false, 00:11:55.207 "nvme_io_md": false, 00:11:55.207 "write_zeroes": true, 00:11:55.207 "zcopy": false, 00:11:55.207 "get_zone_info": false, 00:11:55.207 "zone_management": false, 00:11:55.207 "zone_append": false, 00:11:55.207 "compare": false, 00:11:55.207 "compare_and_write": false, 00:11:55.207 "abort": false, 00:11:55.207 "seek_hole": false, 00:11:55.207 "seek_data": false, 00:11:55.207 "copy": false, 
00:11:55.207 "nvme_iov_md": false 00:11:55.207 }, 00:11:55.207 "memory_domains": [ 00:11:55.207 { 00:11:55.207 "dma_device_id": "system", 00:11:55.207 "dma_device_type": 1 00:11:55.207 }, 00:11:55.207 { 00:11:55.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.207 "dma_device_type": 2 00:11:55.207 }, 00:11:55.207 { 00:11:55.207 "dma_device_id": "system", 00:11:55.207 "dma_device_type": 1 00:11:55.207 }, 00:11:55.207 { 00:11:55.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.207 "dma_device_type": 2 00:11:55.207 }, 00:11:55.207 { 00:11:55.207 "dma_device_id": "system", 00:11:55.207 "dma_device_type": 1 00:11:55.207 }, 00:11:55.207 { 00:11:55.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.207 "dma_device_type": 2 00:11:55.207 }, 00:11:55.207 { 00:11:55.207 "dma_device_id": "system", 00:11:55.207 "dma_device_type": 1 00:11:55.207 }, 00:11:55.207 { 00:11:55.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.207 "dma_device_type": 2 00:11:55.207 } 00:11:55.207 ], 00:11:55.207 "driver_specific": { 00:11:55.207 "raid": { 00:11:55.207 "uuid": "8c024803-e69b-4aab-8dd7-9c2edd05fb15", 00:11:55.207 "strip_size_kb": 64, 00:11:55.207 "state": "online", 00:11:55.207 "raid_level": "concat", 00:11:55.207 "superblock": false, 00:11:55.207 "num_base_bdevs": 4, 00:11:55.207 "num_base_bdevs_discovered": 4, 00:11:55.207 "num_base_bdevs_operational": 4, 00:11:55.207 "base_bdevs_list": [ 00:11:55.207 { 00:11:55.207 "name": "NewBaseBdev", 00:11:55.207 "uuid": "aa6eebaf-2f69-460a-8ccc-b1414695af80", 00:11:55.207 "is_configured": true, 00:11:55.207 "data_offset": 0, 00:11:55.207 "data_size": 65536 00:11:55.207 }, 00:11:55.207 { 00:11:55.207 "name": "BaseBdev2", 00:11:55.207 "uuid": "c137679d-db96-4829-897b-d65430d2c80f", 00:11:55.207 "is_configured": true, 00:11:55.207 "data_offset": 0, 00:11:55.207 "data_size": 65536 00:11:55.207 }, 00:11:55.207 { 00:11:55.207 "name": "BaseBdev3", 00:11:55.207 "uuid": "4d8749f2-b38d-4531-ae20-2d34584ae5e5", 00:11:55.207 
"is_configured": true, 00:11:55.207 "data_offset": 0, 00:11:55.207 "data_size": 65536 00:11:55.207 }, 00:11:55.207 { 00:11:55.207 "name": "BaseBdev4", 00:11:55.207 "uuid": "1acadfdd-9c01-4f58-9874-061ca4ab8a24", 00:11:55.207 "is_configured": true, 00:11:55.207 "data_offset": 0, 00:11:55.207 "data_size": 65536 00:11:55.207 } 00:11:55.207 ] 00:11:55.207 } 00:11:55.207 } 00:11:55.207 }' 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:55.207 BaseBdev2 00:11:55.207 BaseBdev3 00:11:55.207 BaseBdev4' 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.207 19:01:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.207 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.466 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:55.467 19:01:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.467 [2024-11-26 19:01:21.913179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.467 [2024-11-26 19:01:21.913224] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.467 [2024-11-26 19:01:21.913372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.467 [2024-11-26 19:01:21.913477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.467 [2024-11-26 19:01:21.913493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71769 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71769 ']' 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71769 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71769 00:11:55.467 killing process with pid 71769 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71769' 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71769 00:11:55.467 [2024-11-26 19:01:21.951621] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.467 19:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71769 00:11:55.725 [2024-11-26 19:01:22.334855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:57.101 00:11:57.101 real 0m13.264s 00:11:57.101 user 0m21.800s 00:11:57.101 sys 0m1.903s 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.101 ************************************ 00:11:57.101 END TEST raid_state_function_test 00:11:57.101 ************************************ 
00:11:57.101 19:01:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:57.101 19:01:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:57.101 19:01:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.101 19:01:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.101 ************************************ 00:11:57.101 START TEST raid_state_function_test_sb 00:11:57.101 ************************************ 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:57.101 
19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:57.101 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=72458 00:11:57.102 Process raid pid: 72458 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72458' 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72458 00:11:57.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72458 ']' 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.102 19:01:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.102 [2024-11-26 19:01:23.687904] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
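The trace above launches `bdev_svc` in the background and then calls `waitforlisten 72458`, which blocks until the app is up and listening on `/var/tmp/spdk.sock` (bounded by `max_retries=100`). As a rough illustration of that pattern, here is a minimal polling loop in the same style; the function name, delay, and file-existence check are illustrative stand-ins, not SPDK's actual `waitforlisten` implementation:

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style helper: poll until the target process has
# created its RPC socket path, giving up after max_retries attempts.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i < max_retries )); do
        # Accept either a real UNIX socket or (for this demo) any file.
        [[ -S $sock || -e $sock ]] && return 0
        sleep 0.1
        (( ++i ))
    done
    return 1
}

# Demo: a background job "starts listening" after a short delay.
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &
wait_for_sock "$tmp/spdk.sock" 50 && echo "listening"
wait
rm -rf "$tmp"
```

The real helper additionally verifies the PID is still alive between retries, so a crashed app fails fast instead of burning the full retry budget.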
00:11:57.102 [2024-11-26 19:01:23.688071] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.359 [2024-11-26 19:01:23.864898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.617 [2024-11-26 19:01:24.036443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.875 [2024-11-26 19:01:24.294890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.875 [2024-11-26 19:01:24.294942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.134 [2024-11-26 19:01:24.708096] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:58.134 [2024-11-26 19:01:24.708167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:58.134 [2024-11-26 19:01:24.708186] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:58.134 [2024-11-26 19:01:24.708203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:58.134 [2024-11-26 19:01:24.708213] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:58.134 [2024-11-26 19:01:24.708227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:58.134 [2024-11-26 19:01:24.708237] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:58.134 [2024-11-26 19:01:24.708251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.134 
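The `verify_raid_bdev_state` helper shown above pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and then asserts on fields of the result. A standalone sketch of that filtering step, using a hand-written JSON sample in place of real RPC output (only the fields the test inspects are included here):

```shell
#!/usr/bin/env bash
# Illustrative stand-in for `rpc.py bdev_raid_get_bdevs all` output.
bdevs='[
  {"name": "Existed_Raid", "state": "configuring", "raid_level": "concat",
   "strip_size_kb": 64, "num_base_bdevs": 4, "num_base_bdevs_discovered": 0},
  {"name": "Other_Raid", "state": "online"}
]'

# Same filter the test uses: keep only the raid bdev under test.
info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$bdevs")

# Individual fields are then extracted for the state assertions.
jq -r .state <<< "$info"
jq -r .num_base_bdevs <<< "$info"
```

This prints `configuring` and `4` for the sample above, mirroring the `expected_state=configuring` / `num_base_bdevs_operational=4` checks the test performs after each `bdev_raid_create` call.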
19:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.134 19:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.392 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.392 "name": "Existed_Raid", 00:11:58.392 "uuid": "ddf5e488-5039-4dc2-a229-6ee282b313a5", 00:11:58.392 "strip_size_kb": 64, 00:11:58.392 "state": "configuring", 00:11:58.392 "raid_level": "concat", 00:11:58.392 "superblock": true, 00:11:58.392 "num_base_bdevs": 4, 00:11:58.392 "num_base_bdevs_discovered": 0, 00:11:58.392 "num_base_bdevs_operational": 4, 00:11:58.392 "base_bdevs_list": [ 00:11:58.392 { 00:11:58.392 "name": "BaseBdev1", 00:11:58.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.392 "is_configured": false, 00:11:58.392 "data_offset": 0, 00:11:58.392 "data_size": 0 00:11:58.392 }, 00:11:58.392 { 00:11:58.392 "name": "BaseBdev2", 00:11:58.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.392 "is_configured": false, 00:11:58.392 "data_offset": 0, 00:11:58.392 "data_size": 0 00:11:58.392 }, 00:11:58.392 { 00:11:58.392 "name": "BaseBdev3", 00:11:58.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.392 "is_configured": false, 00:11:58.392 "data_offset": 0, 00:11:58.392 "data_size": 0 00:11:58.392 }, 00:11:58.392 { 00:11:58.392 "name": "BaseBdev4", 00:11:58.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.392 "is_configured": false, 00:11:58.392 "data_offset": 0, 00:11:58.392 "data_size": 0 00:11:58.392 } 00:11:58.392 ] 00:11:58.392 }' 00:11:58.392 19:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.392 19:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.652 19:01:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:58.652 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.652 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.652 [2024-11-26 19:01:25.224125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:58.652 [2024-11-26 19:01:25.224175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:58.652 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.652 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:58.652 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.652 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.652 [2024-11-26 19:01:25.232127] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:58.652 [2024-11-26 19:01:25.232193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:58.652 [2024-11-26 19:01:25.232209] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:58.652 [2024-11-26 19:01:25.232224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:58.652 [2024-11-26 19:01:25.232233] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:58.652 [2024-11-26 19:01:25.232257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:58.652 [2024-11-26 19:01:25.232265] bdev.c:8626:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:58.652 [2024-11-26 19:01:25.232278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:58.652 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.652 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:58.652 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.652 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.912 [2024-11-26 19:01:25.283382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.912 BaseBdev1 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.912 [ 00:11:58.912 { 00:11:58.912 "name": "BaseBdev1", 00:11:58.912 "aliases": [ 00:11:58.912 "c0569e98-faf3-43b6-9133-701ea5afc44f" 00:11:58.912 ], 00:11:58.912 "product_name": "Malloc disk", 00:11:58.912 "block_size": 512, 00:11:58.912 "num_blocks": 65536, 00:11:58.912 "uuid": "c0569e98-faf3-43b6-9133-701ea5afc44f", 00:11:58.912 "assigned_rate_limits": { 00:11:58.912 "rw_ios_per_sec": 0, 00:11:58.912 "rw_mbytes_per_sec": 0, 00:11:58.912 "r_mbytes_per_sec": 0, 00:11:58.912 "w_mbytes_per_sec": 0 00:11:58.912 }, 00:11:58.912 "claimed": true, 00:11:58.912 "claim_type": "exclusive_write", 00:11:58.912 "zoned": false, 00:11:58.912 "supported_io_types": { 00:11:58.912 "read": true, 00:11:58.912 "write": true, 00:11:58.912 "unmap": true, 00:11:58.912 "flush": true, 00:11:58.912 "reset": true, 00:11:58.912 "nvme_admin": false, 00:11:58.912 "nvme_io": false, 00:11:58.912 "nvme_io_md": false, 00:11:58.912 "write_zeroes": true, 00:11:58.912 "zcopy": true, 00:11:58.912 "get_zone_info": false, 00:11:58.912 "zone_management": false, 00:11:58.912 "zone_append": false, 00:11:58.912 "compare": false, 00:11:58.912 "compare_and_write": false, 00:11:58.912 "abort": true, 00:11:58.912 "seek_hole": false, 00:11:58.912 "seek_data": false, 00:11:58.912 "copy": true, 00:11:58.912 "nvme_iov_md": false 00:11:58.912 }, 00:11:58.912 "memory_domains": [ 00:11:58.912 { 00:11:58.912 "dma_device_id": "system", 00:11:58.912 "dma_device_type": 1 00:11:58.912 }, 00:11:58.912 { 00:11:58.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.912 "dma_device_type": 2 00:11:58.912 } 
00:11:58.912 ], 00:11:58.912 "driver_specific": {} 00:11:58.912 } 00:11:58.912 ] 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.912 19:01:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.912 "name": "Existed_Raid", 00:11:58.912 "uuid": "7b856f63-9652-463a-9bcf-d6e657892dfd", 00:11:58.912 "strip_size_kb": 64, 00:11:58.912 "state": "configuring", 00:11:58.912 "raid_level": "concat", 00:11:58.912 "superblock": true, 00:11:58.912 "num_base_bdevs": 4, 00:11:58.912 "num_base_bdevs_discovered": 1, 00:11:58.912 "num_base_bdevs_operational": 4, 00:11:58.912 "base_bdevs_list": [ 00:11:58.912 { 00:11:58.912 "name": "BaseBdev1", 00:11:58.912 "uuid": "c0569e98-faf3-43b6-9133-701ea5afc44f", 00:11:58.912 "is_configured": true, 00:11:58.912 "data_offset": 2048, 00:11:58.912 "data_size": 63488 00:11:58.912 }, 00:11:58.912 { 00:11:58.912 "name": "BaseBdev2", 00:11:58.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.912 "is_configured": false, 00:11:58.912 "data_offset": 0, 00:11:58.912 "data_size": 0 00:11:58.912 }, 00:11:58.912 { 00:11:58.912 "name": "BaseBdev3", 00:11:58.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.912 "is_configured": false, 00:11:58.912 "data_offset": 0, 00:11:58.912 "data_size": 0 00:11:58.912 }, 00:11:58.912 { 00:11:58.912 "name": "BaseBdev4", 00:11:58.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.912 "is_configured": false, 00:11:58.912 "data_offset": 0, 00:11:58.912 "data_size": 0 00:11:58.912 } 00:11:58.912 ] 00:11:58.912 }' 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.912 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.481 19:01:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.481 [2024-11-26 19:01:25.851640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:59.481 [2024-11-26 19:01:25.851759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.481 [2024-11-26 19:01:25.863721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.481 [2024-11-26 19:01:25.866523] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:59.481 [2024-11-26 19:01:25.866595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:59.481 [2024-11-26 19:01:25.866613] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:59.481 [2024-11-26 19:01:25.866631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:59.481 [2024-11-26 19:01:25.866641] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:59.481 [2024-11-26 19:01:25.866669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.481 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:59.481 "name": "Existed_Raid", 00:11:59.481 "uuid": "116dc919-6011-49a0-8612-384a9dc6b5e8", 00:11:59.481 "strip_size_kb": 64, 00:11:59.481 "state": "configuring", 00:11:59.481 "raid_level": "concat", 00:11:59.481 "superblock": true, 00:11:59.481 "num_base_bdevs": 4, 00:11:59.481 "num_base_bdevs_discovered": 1, 00:11:59.481 "num_base_bdevs_operational": 4, 00:11:59.481 "base_bdevs_list": [ 00:11:59.481 { 00:11:59.481 "name": "BaseBdev1", 00:11:59.481 "uuid": "c0569e98-faf3-43b6-9133-701ea5afc44f", 00:11:59.481 "is_configured": true, 00:11:59.481 "data_offset": 2048, 00:11:59.481 "data_size": 63488 00:11:59.481 }, 00:11:59.481 { 00:11:59.481 "name": "BaseBdev2", 00:11:59.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.481 "is_configured": false, 00:11:59.481 "data_offset": 0, 00:11:59.481 "data_size": 0 00:11:59.481 }, 00:11:59.481 { 00:11:59.481 "name": "BaseBdev3", 00:11:59.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.481 "is_configured": false, 00:11:59.481 "data_offset": 0, 00:11:59.481 "data_size": 0 00:11:59.481 }, 00:11:59.481 { 00:11:59.481 "name": "BaseBdev4", 00:11:59.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.481 "is_configured": false, 00:11:59.481 "data_offset": 0, 00:11:59.481 "data_size": 0 00:11:59.481 } 00:11:59.481 ] 00:11:59.482 }' 00:11:59.482 19:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.482 19:01:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.049 [2024-11-26 19:01:26.417277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:00.049 BaseBdev2 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.049 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.049 [ 00:12:00.049 { 00:12:00.049 "name": "BaseBdev2", 00:12:00.049 "aliases": [ 00:12:00.049 "85c8954c-f9ac-4235-a0a8-e73089eb650d" 00:12:00.049 ], 00:12:00.049 "product_name": "Malloc disk", 00:12:00.049 "block_size": 512, 00:12:00.049 "num_blocks": 65536, 00:12:00.049 "uuid": "85c8954c-f9ac-4235-a0a8-e73089eb650d", 
00:12:00.049 "assigned_rate_limits": { 00:12:00.049 "rw_ios_per_sec": 0, 00:12:00.049 "rw_mbytes_per_sec": 0, 00:12:00.049 "r_mbytes_per_sec": 0, 00:12:00.049 "w_mbytes_per_sec": 0 00:12:00.049 }, 00:12:00.049 "claimed": true, 00:12:00.049 "claim_type": "exclusive_write", 00:12:00.049 "zoned": false, 00:12:00.050 "supported_io_types": { 00:12:00.050 "read": true, 00:12:00.050 "write": true, 00:12:00.050 "unmap": true, 00:12:00.050 "flush": true, 00:12:00.050 "reset": true, 00:12:00.050 "nvme_admin": false, 00:12:00.050 "nvme_io": false, 00:12:00.050 "nvme_io_md": false, 00:12:00.050 "write_zeroes": true, 00:12:00.050 "zcopy": true, 00:12:00.050 "get_zone_info": false, 00:12:00.050 "zone_management": false, 00:12:00.050 "zone_append": false, 00:12:00.050 "compare": false, 00:12:00.050 "compare_and_write": false, 00:12:00.050 "abort": true, 00:12:00.050 "seek_hole": false, 00:12:00.050 "seek_data": false, 00:12:00.050 "copy": true, 00:12:00.050 "nvme_iov_md": false 00:12:00.050 }, 00:12:00.050 "memory_domains": [ 00:12:00.050 { 00:12:00.050 "dma_device_id": "system", 00:12:00.050 "dma_device_type": 1 00:12:00.050 }, 00:12:00.050 { 00:12:00.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.050 "dma_device_type": 2 00:12:00.050 } 00:12:00.050 ], 00:12:00.050 "driver_specific": {} 00:12:00.050 } 00:12:00.050 ] 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.050 "name": "Existed_Raid", 00:12:00.050 "uuid": "116dc919-6011-49a0-8612-384a9dc6b5e8", 00:12:00.050 "strip_size_kb": 64, 00:12:00.050 "state": "configuring", 00:12:00.050 "raid_level": "concat", 00:12:00.050 "superblock": true, 00:12:00.050 "num_base_bdevs": 4, 00:12:00.050 "num_base_bdevs_discovered": 2, 00:12:00.050 
"num_base_bdevs_operational": 4, 00:12:00.050 "base_bdevs_list": [ 00:12:00.050 { 00:12:00.050 "name": "BaseBdev1", 00:12:00.050 "uuid": "c0569e98-faf3-43b6-9133-701ea5afc44f", 00:12:00.050 "is_configured": true, 00:12:00.050 "data_offset": 2048, 00:12:00.050 "data_size": 63488 00:12:00.050 }, 00:12:00.050 { 00:12:00.050 "name": "BaseBdev2", 00:12:00.050 "uuid": "85c8954c-f9ac-4235-a0a8-e73089eb650d", 00:12:00.050 "is_configured": true, 00:12:00.050 "data_offset": 2048, 00:12:00.050 "data_size": 63488 00:12:00.050 }, 00:12:00.050 { 00:12:00.050 "name": "BaseBdev3", 00:12:00.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.050 "is_configured": false, 00:12:00.050 "data_offset": 0, 00:12:00.050 "data_size": 0 00:12:00.050 }, 00:12:00.050 { 00:12:00.050 "name": "BaseBdev4", 00:12:00.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.050 "is_configured": false, 00:12:00.050 "data_offset": 0, 00:12:00.050 "data_size": 0 00:12:00.050 } 00:12:00.050 ] 00:12:00.050 }' 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.050 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.618 19:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:00.618 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.618 19:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.618 [2024-11-26 19:01:27.029125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.618 BaseBdev3 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.618 [ 00:12:00.618 { 00:12:00.618 "name": "BaseBdev3", 00:12:00.618 "aliases": [ 00:12:00.618 "6f743577-332d-4558-adba-298f6cd84705" 00:12:00.618 ], 00:12:00.618 "product_name": "Malloc disk", 00:12:00.618 "block_size": 512, 00:12:00.618 "num_blocks": 65536, 00:12:00.618 "uuid": "6f743577-332d-4558-adba-298f6cd84705", 00:12:00.618 "assigned_rate_limits": { 00:12:00.618 "rw_ios_per_sec": 0, 00:12:00.618 "rw_mbytes_per_sec": 0, 00:12:00.618 "r_mbytes_per_sec": 0, 00:12:00.618 "w_mbytes_per_sec": 0 00:12:00.618 }, 00:12:00.618 "claimed": true, 00:12:00.618 "claim_type": "exclusive_write", 00:12:00.618 "zoned": false, 00:12:00.618 "supported_io_types": { 
00:12:00.618 "read": true, 00:12:00.618 "write": true, 00:12:00.618 "unmap": true, 00:12:00.618 "flush": true, 00:12:00.618 "reset": true, 00:12:00.618 "nvme_admin": false, 00:12:00.618 "nvme_io": false, 00:12:00.618 "nvme_io_md": false, 00:12:00.618 "write_zeroes": true, 00:12:00.618 "zcopy": true, 00:12:00.618 "get_zone_info": false, 00:12:00.618 "zone_management": false, 00:12:00.618 "zone_append": false, 00:12:00.618 "compare": false, 00:12:00.618 "compare_and_write": false, 00:12:00.618 "abort": true, 00:12:00.618 "seek_hole": false, 00:12:00.618 "seek_data": false, 00:12:00.618 "copy": true, 00:12:00.618 "nvme_iov_md": false 00:12:00.618 }, 00:12:00.618 "memory_domains": [ 00:12:00.618 { 00:12:00.618 "dma_device_id": "system", 00:12:00.618 "dma_device_type": 1 00:12:00.618 }, 00:12:00.618 { 00:12:00.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.618 "dma_device_type": 2 00:12:00.618 } 00:12:00.618 ], 00:12:00.618 "driver_specific": {} 00:12:00.618 } 00:12:00.618 ] 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.618 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.618 "name": "Existed_Raid", 00:12:00.618 "uuid": "116dc919-6011-49a0-8612-384a9dc6b5e8", 00:12:00.618 "strip_size_kb": 64, 00:12:00.618 "state": "configuring", 00:12:00.618 "raid_level": "concat", 00:12:00.618 "superblock": true, 00:12:00.618 "num_base_bdevs": 4, 00:12:00.618 "num_base_bdevs_discovered": 3, 00:12:00.618 "num_base_bdevs_operational": 4, 00:12:00.618 "base_bdevs_list": [ 00:12:00.618 { 00:12:00.618 "name": "BaseBdev1", 00:12:00.618 "uuid": "c0569e98-faf3-43b6-9133-701ea5afc44f", 00:12:00.618 "is_configured": true, 00:12:00.618 "data_offset": 2048, 00:12:00.618 "data_size": 63488 00:12:00.618 }, 00:12:00.619 { 00:12:00.619 "name": "BaseBdev2", 00:12:00.619 
"uuid": "85c8954c-f9ac-4235-a0a8-e73089eb650d", 00:12:00.619 "is_configured": true, 00:12:00.619 "data_offset": 2048, 00:12:00.619 "data_size": 63488 00:12:00.619 }, 00:12:00.619 { 00:12:00.619 "name": "BaseBdev3", 00:12:00.619 "uuid": "6f743577-332d-4558-adba-298f6cd84705", 00:12:00.619 "is_configured": true, 00:12:00.619 "data_offset": 2048, 00:12:00.619 "data_size": 63488 00:12:00.619 }, 00:12:00.619 { 00:12:00.619 "name": "BaseBdev4", 00:12:00.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.619 "is_configured": false, 00:12:00.619 "data_offset": 0, 00:12:00.619 "data_size": 0 00:12:00.619 } 00:12:00.619 ] 00:12:00.619 }' 00:12:00.619 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.619 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.187 [2024-11-26 19:01:27.647699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:01.187 [2024-11-26 19:01:27.648102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:01.187 [2024-11-26 19:01:27.648128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:01.187 BaseBdev4 00:12:01.187 [2024-11-26 19:01:27.648509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:01.187 [2024-11-26 19:01:27.648711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:01.187 [2024-11-26 19:01:27.648733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:01.187 [2024-11-26 19:01:27.648946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.187 [ 00:12:01.187 { 00:12:01.187 "name": "BaseBdev4", 00:12:01.187 "aliases": [ 00:12:01.187 "c9ac64f0-93ec-43e6-b5dc-1c14f6fb4724" 00:12:01.187 ], 00:12:01.187 "product_name": "Malloc disk", 00:12:01.187 "block_size": 512, 00:12:01.187 
"num_blocks": 65536, 00:12:01.187 "uuid": "c9ac64f0-93ec-43e6-b5dc-1c14f6fb4724", 00:12:01.187 "assigned_rate_limits": { 00:12:01.187 "rw_ios_per_sec": 0, 00:12:01.187 "rw_mbytes_per_sec": 0, 00:12:01.187 "r_mbytes_per_sec": 0, 00:12:01.187 "w_mbytes_per_sec": 0 00:12:01.187 }, 00:12:01.187 "claimed": true, 00:12:01.187 "claim_type": "exclusive_write", 00:12:01.187 "zoned": false, 00:12:01.187 "supported_io_types": { 00:12:01.187 "read": true, 00:12:01.187 "write": true, 00:12:01.187 "unmap": true, 00:12:01.187 "flush": true, 00:12:01.187 "reset": true, 00:12:01.187 "nvme_admin": false, 00:12:01.187 "nvme_io": false, 00:12:01.187 "nvme_io_md": false, 00:12:01.187 "write_zeroes": true, 00:12:01.187 "zcopy": true, 00:12:01.187 "get_zone_info": false, 00:12:01.187 "zone_management": false, 00:12:01.187 "zone_append": false, 00:12:01.187 "compare": false, 00:12:01.187 "compare_and_write": false, 00:12:01.187 "abort": true, 00:12:01.187 "seek_hole": false, 00:12:01.187 "seek_data": false, 00:12:01.187 "copy": true, 00:12:01.187 "nvme_iov_md": false 00:12:01.187 }, 00:12:01.187 "memory_domains": [ 00:12:01.187 { 00:12:01.187 "dma_device_id": "system", 00:12:01.187 "dma_device_type": 1 00:12:01.187 }, 00:12:01.187 { 00:12:01.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.187 "dma_device_type": 2 00:12:01.187 } 00:12:01.187 ], 00:12:01.187 "driver_specific": {} 00:12:01.187 } 00:12:01.187 ] 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.187 "name": "Existed_Raid", 00:12:01.187 "uuid": "116dc919-6011-49a0-8612-384a9dc6b5e8", 00:12:01.187 "strip_size_kb": 64, 00:12:01.187 "state": "online", 00:12:01.187 "raid_level": "concat", 00:12:01.187 "superblock": true, 00:12:01.187 "num_base_bdevs": 4, 
00:12:01.187 "num_base_bdevs_discovered": 4, 00:12:01.187 "num_base_bdevs_operational": 4, 00:12:01.187 "base_bdevs_list": [ 00:12:01.187 { 00:12:01.187 "name": "BaseBdev1", 00:12:01.187 "uuid": "c0569e98-faf3-43b6-9133-701ea5afc44f", 00:12:01.187 "is_configured": true, 00:12:01.187 "data_offset": 2048, 00:12:01.187 "data_size": 63488 00:12:01.187 }, 00:12:01.187 { 00:12:01.187 "name": "BaseBdev2", 00:12:01.187 "uuid": "85c8954c-f9ac-4235-a0a8-e73089eb650d", 00:12:01.187 "is_configured": true, 00:12:01.187 "data_offset": 2048, 00:12:01.187 "data_size": 63488 00:12:01.187 }, 00:12:01.187 { 00:12:01.187 "name": "BaseBdev3", 00:12:01.187 "uuid": "6f743577-332d-4558-adba-298f6cd84705", 00:12:01.187 "is_configured": true, 00:12:01.187 "data_offset": 2048, 00:12:01.187 "data_size": 63488 00:12:01.187 }, 00:12:01.187 { 00:12:01.187 "name": "BaseBdev4", 00:12:01.187 "uuid": "c9ac64f0-93ec-43e6-b5dc-1c14f6fb4724", 00:12:01.187 "is_configured": true, 00:12:01.187 "data_offset": 2048, 00:12:01.187 "data_size": 63488 00:12:01.187 } 00:12:01.187 ] 00:12:01.187 }' 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.187 19:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.755 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:01.755 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:01.755 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:01.755 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:01.755 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:01.755 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:01.755 
19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:01.755 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.755 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.755 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:01.755 [2024-11-26 19:01:28.260501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.755 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.755 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:01.755 "name": "Existed_Raid", 00:12:01.755 "aliases": [ 00:12:01.755 "116dc919-6011-49a0-8612-384a9dc6b5e8" 00:12:01.755 ], 00:12:01.755 "product_name": "Raid Volume", 00:12:01.755 "block_size": 512, 00:12:01.755 "num_blocks": 253952, 00:12:01.755 "uuid": "116dc919-6011-49a0-8612-384a9dc6b5e8", 00:12:01.755 "assigned_rate_limits": { 00:12:01.755 "rw_ios_per_sec": 0, 00:12:01.755 "rw_mbytes_per_sec": 0, 00:12:01.755 "r_mbytes_per_sec": 0, 00:12:01.755 "w_mbytes_per_sec": 0 00:12:01.755 }, 00:12:01.755 "claimed": false, 00:12:01.755 "zoned": false, 00:12:01.755 "supported_io_types": { 00:12:01.755 "read": true, 00:12:01.755 "write": true, 00:12:01.755 "unmap": true, 00:12:01.755 "flush": true, 00:12:01.755 "reset": true, 00:12:01.755 "nvme_admin": false, 00:12:01.755 "nvme_io": false, 00:12:01.755 "nvme_io_md": false, 00:12:01.755 "write_zeroes": true, 00:12:01.755 "zcopy": false, 00:12:01.755 "get_zone_info": false, 00:12:01.755 "zone_management": false, 00:12:01.755 "zone_append": false, 00:12:01.755 "compare": false, 00:12:01.755 "compare_and_write": false, 00:12:01.755 "abort": false, 00:12:01.755 "seek_hole": false, 00:12:01.755 "seek_data": false, 00:12:01.755 "copy": false, 00:12:01.755 
"nvme_iov_md": false 00:12:01.755 }, 00:12:01.755 "memory_domains": [ 00:12:01.755 { 00:12:01.755 "dma_device_id": "system", 00:12:01.755 "dma_device_type": 1 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.755 "dma_device_type": 2 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "dma_device_id": "system", 00:12:01.755 "dma_device_type": 1 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.755 "dma_device_type": 2 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "dma_device_id": "system", 00:12:01.755 "dma_device_type": 1 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.755 "dma_device_type": 2 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "dma_device_id": "system", 00:12:01.755 "dma_device_type": 1 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.755 "dma_device_type": 2 00:12:01.755 } 00:12:01.755 ], 00:12:01.755 "driver_specific": { 00:12:01.755 "raid": { 00:12:01.755 "uuid": "116dc919-6011-49a0-8612-384a9dc6b5e8", 00:12:01.755 "strip_size_kb": 64, 00:12:01.755 "state": "online", 00:12:01.755 "raid_level": "concat", 00:12:01.755 "superblock": true, 00:12:01.755 "num_base_bdevs": 4, 00:12:01.755 "num_base_bdevs_discovered": 4, 00:12:01.755 "num_base_bdevs_operational": 4, 00:12:01.755 "base_bdevs_list": [ 00:12:01.755 { 00:12:01.755 "name": "BaseBdev1", 00:12:01.755 "uuid": "c0569e98-faf3-43b6-9133-701ea5afc44f", 00:12:01.755 "is_configured": true, 00:12:01.755 "data_offset": 2048, 00:12:01.755 "data_size": 63488 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "name": "BaseBdev2", 00:12:01.755 "uuid": "85c8954c-f9ac-4235-a0a8-e73089eb650d", 00:12:01.755 "is_configured": true, 00:12:01.755 "data_offset": 2048, 00:12:01.755 "data_size": 63488 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "name": "BaseBdev3", 00:12:01.755 "uuid": "6f743577-332d-4558-adba-298f6cd84705", 00:12:01.755 "is_configured": true, 
00:12:01.755 "data_offset": 2048, 00:12:01.755 "data_size": 63488 00:12:01.755 }, 00:12:01.755 { 00:12:01.755 "name": "BaseBdev4", 00:12:01.755 "uuid": "c9ac64f0-93ec-43e6-b5dc-1c14f6fb4724", 00:12:01.755 "is_configured": true, 00:12:01.755 "data_offset": 2048, 00:12:01.755 "data_size": 63488 00:12:01.755 } 00:12:01.755 ] 00:12:01.755 } 00:12:01.755 } 00:12:01.755 }' 00:12:01.756 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:01.756 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:01.756 BaseBdev2 00:12:01.756 BaseBdev3 00:12:01.756 BaseBdev4' 00:12:01.756 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.014 19:01:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.014 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.273 [2024-11-26 19:01:28.652218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:02.273 [2024-11-26 19:01:28.652264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.273 [2024-11-26 19:01:28.652363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.273 "name": "Existed_Raid", 00:12:02.273 "uuid": "116dc919-6011-49a0-8612-384a9dc6b5e8", 00:12:02.273 "strip_size_kb": 64, 00:12:02.273 "state": "offline", 00:12:02.273 "raid_level": "concat", 00:12:02.273 "superblock": true, 00:12:02.273 "num_base_bdevs": 4, 00:12:02.273 "num_base_bdevs_discovered": 3, 00:12:02.273 "num_base_bdevs_operational": 3, 00:12:02.273 "base_bdevs_list": [ 00:12:02.273 { 00:12:02.273 "name": null, 00:12:02.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.273 "is_configured": false, 00:12:02.273 "data_offset": 0, 00:12:02.273 "data_size": 63488 00:12:02.273 }, 00:12:02.273 { 00:12:02.273 "name": "BaseBdev2", 00:12:02.273 "uuid": "85c8954c-f9ac-4235-a0a8-e73089eb650d", 00:12:02.273 "is_configured": true, 00:12:02.273 "data_offset": 2048, 00:12:02.273 "data_size": 63488 00:12:02.273 }, 00:12:02.273 { 00:12:02.273 "name": "BaseBdev3", 00:12:02.273 "uuid": "6f743577-332d-4558-adba-298f6cd84705", 00:12:02.273 "is_configured": true, 00:12:02.273 "data_offset": 2048, 00:12:02.273 "data_size": 63488 00:12:02.273 }, 00:12:02.273 { 00:12:02.273 "name": "BaseBdev4", 00:12:02.273 "uuid": "c9ac64f0-93ec-43e6-b5dc-1c14f6fb4724", 00:12:02.273 "is_configured": true, 00:12:02.273 "data_offset": 2048, 00:12:02.273 "data_size": 63488 00:12:02.273 } 00:12:02.273 ] 00:12:02.273 }' 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.273 19:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.840 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:02.840 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:02.840 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.840 
19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.840 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:02.840 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.840 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.840 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:02.840 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:02.840 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:02.840 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.840 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.840 [2024-11-26 19:01:29.313362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:02.841 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.841 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:02.841 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:02.841 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.841 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:02.841 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.841 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.841 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.099 [2024-11-26 19:01:29.468233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:03.099 19:01:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.099 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.099 [2024-11-26 19:01:29.624367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:03.099 [2024-11-26 19:01:29.624481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:03.358 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.358 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:03.358 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.359 BaseBdev2 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.359 [ 00:12:03.359 { 00:12:03.359 "name": "BaseBdev2", 00:12:03.359 "aliases": [ 00:12:03.359 
"55721de7-7b93-4c96-bd1e-ef64acb43b05" 00:12:03.359 ], 00:12:03.359 "product_name": "Malloc disk", 00:12:03.359 "block_size": 512, 00:12:03.359 "num_blocks": 65536, 00:12:03.359 "uuid": "55721de7-7b93-4c96-bd1e-ef64acb43b05", 00:12:03.359 "assigned_rate_limits": { 00:12:03.359 "rw_ios_per_sec": 0, 00:12:03.359 "rw_mbytes_per_sec": 0, 00:12:03.359 "r_mbytes_per_sec": 0, 00:12:03.359 "w_mbytes_per_sec": 0 00:12:03.359 }, 00:12:03.359 "claimed": false, 00:12:03.359 "zoned": false, 00:12:03.359 "supported_io_types": { 00:12:03.359 "read": true, 00:12:03.359 "write": true, 00:12:03.359 "unmap": true, 00:12:03.359 "flush": true, 00:12:03.359 "reset": true, 00:12:03.359 "nvme_admin": false, 00:12:03.359 "nvme_io": false, 00:12:03.359 "nvme_io_md": false, 00:12:03.359 "write_zeroes": true, 00:12:03.359 "zcopy": true, 00:12:03.359 "get_zone_info": false, 00:12:03.359 "zone_management": false, 00:12:03.359 "zone_append": false, 00:12:03.359 "compare": false, 00:12:03.359 "compare_and_write": false, 00:12:03.359 "abort": true, 00:12:03.359 "seek_hole": false, 00:12:03.359 "seek_data": false, 00:12:03.359 "copy": true, 00:12:03.359 "nvme_iov_md": false 00:12:03.359 }, 00:12:03.359 "memory_domains": [ 00:12:03.359 { 00:12:03.359 "dma_device_id": "system", 00:12:03.359 "dma_device_type": 1 00:12:03.359 }, 00:12:03.359 { 00:12:03.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.359 "dma_device_type": 2 00:12:03.359 } 00:12:03.359 ], 00:12:03.359 "driver_specific": {} 00:12:03.359 } 00:12:03.359 ] 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:03.359 19:01:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.359 BaseBdev3 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.359 [ 00:12:03.359 { 
00:12:03.359 "name": "BaseBdev3", 00:12:03.359 "aliases": [ 00:12:03.359 "c8095ef0-8aa5-482a-8c2c-4922efd8302b" 00:12:03.359 ], 00:12:03.359 "product_name": "Malloc disk", 00:12:03.359 "block_size": 512, 00:12:03.359 "num_blocks": 65536, 00:12:03.359 "uuid": "c8095ef0-8aa5-482a-8c2c-4922efd8302b", 00:12:03.359 "assigned_rate_limits": { 00:12:03.359 "rw_ios_per_sec": 0, 00:12:03.359 "rw_mbytes_per_sec": 0, 00:12:03.359 "r_mbytes_per_sec": 0, 00:12:03.359 "w_mbytes_per_sec": 0 00:12:03.359 }, 00:12:03.359 "claimed": false, 00:12:03.359 "zoned": false, 00:12:03.359 "supported_io_types": { 00:12:03.359 "read": true, 00:12:03.359 "write": true, 00:12:03.359 "unmap": true, 00:12:03.359 "flush": true, 00:12:03.359 "reset": true, 00:12:03.359 "nvme_admin": false, 00:12:03.359 "nvme_io": false, 00:12:03.359 "nvme_io_md": false, 00:12:03.359 "write_zeroes": true, 00:12:03.359 "zcopy": true, 00:12:03.359 "get_zone_info": false, 00:12:03.359 "zone_management": false, 00:12:03.359 "zone_append": false, 00:12:03.359 "compare": false, 00:12:03.359 "compare_and_write": false, 00:12:03.359 "abort": true, 00:12:03.359 "seek_hole": false, 00:12:03.359 "seek_data": false, 00:12:03.359 "copy": true, 00:12:03.359 "nvme_iov_md": false 00:12:03.359 }, 00:12:03.359 "memory_domains": [ 00:12:03.359 { 00:12:03.359 "dma_device_id": "system", 00:12:03.359 "dma_device_type": 1 00:12:03.359 }, 00:12:03.359 { 00:12:03.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.359 "dma_device_type": 2 00:12:03.359 } 00:12:03.359 ], 00:12:03.359 "driver_specific": {} 00:12:03.359 } 00:12:03.359 ] 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.359 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.619 BaseBdev4 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.619 19:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:03.619 [ 00:12:03.619 { 00:12:03.619 "name": "BaseBdev4", 00:12:03.619 "aliases": [ 00:12:03.619 "4ec9917c-21cb-470c-8968-4e575fb46a1a" 00:12:03.619 ], 00:12:03.619 "product_name": "Malloc disk", 00:12:03.619 "block_size": 512, 00:12:03.619 "num_blocks": 65536, 00:12:03.619 "uuid": "4ec9917c-21cb-470c-8968-4e575fb46a1a", 00:12:03.619 "assigned_rate_limits": { 00:12:03.619 "rw_ios_per_sec": 0, 00:12:03.619 "rw_mbytes_per_sec": 0, 00:12:03.619 "r_mbytes_per_sec": 0, 00:12:03.619 "w_mbytes_per_sec": 0 00:12:03.619 }, 00:12:03.619 "claimed": false, 00:12:03.619 "zoned": false, 00:12:03.619 "supported_io_types": { 00:12:03.619 "read": true, 00:12:03.619 "write": true, 00:12:03.619 "unmap": true, 00:12:03.619 "flush": true, 00:12:03.619 "reset": true, 00:12:03.619 "nvme_admin": false, 00:12:03.619 "nvme_io": false, 00:12:03.619 "nvme_io_md": false, 00:12:03.619 "write_zeroes": true, 00:12:03.619 "zcopy": true, 00:12:03.619 "get_zone_info": false, 00:12:03.619 "zone_management": false, 00:12:03.619 "zone_append": false, 00:12:03.619 "compare": false, 00:12:03.619 "compare_and_write": false, 00:12:03.619 "abort": true, 00:12:03.619 "seek_hole": false, 00:12:03.619 "seek_data": false, 00:12:03.619 "copy": true, 00:12:03.619 "nvme_iov_md": false 00:12:03.619 }, 00:12:03.619 "memory_domains": [ 00:12:03.619 { 00:12:03.619 "dma_device_id": "system", 00:12:03.619 "dma_device_type": 1 00:12:03.619 }, 00:12:03.619 { 00:12:03.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.619 "dma_device_type": 2 00:12:03.619 } 00:12:03.619 ], 00:12:03.619 "driver_specific": {} 00:12:03.619 } 00:12:03.619 ] 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:03.619 19:01:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.619 [2024-11-26 19:01:30.018232] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.619 [2024-11-26 19:01:30.018349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.619 [2024-11-26 19:01:30.018383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.619 [2024-11-26 19:01:30.021271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.619 [2024-11-26 19:01:30.021426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.619 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.620 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.620 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.620 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.620 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.620 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.620 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.620 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.620 "name": "Existed_Raid", 00:12:03.620 "uuid": "a083e1d2-243f-49c6-9849-135c2107ea96", 00:12:03.620 "strip_size_kb": 64, 00:12:03.620 "state": "configuring", 00:12:03.620 "raid_level": "concat", 00:12:03.620 "superblock": true, 00:12:03.620 "num_base_bdevs": 4, 00:12:03.620 "num_base_bdevs_discovered": 3, 00:12:03.620 "num_base_bdevs_operational": 4, 00:12:03.620 "base_bdevs_list": [ 00:12:03.620 { 00:12:03.620 "name": "BaseBdev1", 00:12:03.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.620 "is_configured": false, 00:12:03.620 "data_offset": 0, 00:12:03.620 "data_size": 0 00:12:03.620 }, 00:12:03.620 { 00:12:03.620 "name": "BaseBdev2", 00:12:03.620 "uuid": "55721de7-7b93-4c96-bd1e-ef64acb43b05", 00:12:03.620 "is_configured": true, 00:12:03.620 "data_offset": 2048, 00:12:03.620 "data_size": 63488 
00:12:03.620 }, 00:12:03.620 { 00:12:03.620 "name": "BaseBdev3", 00:12:03.620 "uuid": "c8095ef0-8aa5-482a-8c2c-4922efd8302b", 00:12:03.620 "is_configured": true, 00:12:03.620 "data_offset": 2048, 00:12:03.620 "data_size": 63488 00:12:03.620 }, 00:12:03.620 { 00:12:03.620 "name": "BaseBdev4", 00:12:03.620 "uuid": "4ec9917c-21cb-470c-8968-4e575fb46a1a", 00:12:03.620 "is_configured": true, 00:12:03.620 "data_offset": 2048, 00:12:03.620 "data_size": 63488 00:12:03.620 } 00:12:03.620 ] 00:12:03.620 }' 00:12:03.620 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.620 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.187 [2024-11-26 19:01:30.542447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.187 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.188 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.188 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.188 "name": "Existed_Raid", 00:12:04.188 "uuid": "a083e1d2-243f-49c6-9849-135c2107ea96", 00:12:04.188 "strip_size_kb": 64, 00:12:04.188 "state": "configuring", 00:12:04.188 "raid_level": "concat", 00:12:04.188 "superblock": true, 00:12:04.188 "num_base_bdevs": 4, 00:12:04.188 "num_base_bdevs_discovered": 2, 00:12:04.188 "num_base_bdevs_operational": 4, 00:12:04.188 "base_bdevs_list": [ 00:12:04.188 { 00:12:04.188 "name": "BaseBdev1", 00:12:04.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.188 "is_configured": false, 00:12:04.188 "data_offset": 0, 00:12:04.188 "data_size": 0 00:12:04.188 }, 00:12:04.188 { 00:12:04.188 "name": null, 00:12:04.188 "uuid": "55721de7-7b93-4c96-bd1e-ef64acb43b05", 00:12:04.188 "is_configured": false, 00:12:04.188 "data_offset": 0, 00:12:04.188 "data_size": 63488 
00:12:04.188 }, 00:12:04.188 { 00:12:04.188 "name": "BaseBdev3", 00:12:04.188 "uuid": "c8095ef0-8aa5-482a-8c2c-4922efd8302b", 00:12:04.188 "is_configured": true, 00:12:04.188 "data_offset": 2048, 00:12:04.188 "data_size": 63488 00:12:04.188 }, 00:12:04.188 { 00:12:04.188 "name": "BaseBdev4", 00:12:04.188 "uuid": "4ec9917c-21cb-470c-8968-4e575fb46a1a", 00:12:04.188 "is_configured": true, 00:12:04.188 "data_offset": 2048, 00:12:04.188 "data_size": 63488 00:12:04.188 } 00:12:04.188 ] 00:12:04.188 }' 00:12:04.188 19:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.188 19:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.756 [2024-11-26 19:01:31.169808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.756 BaseBdev1 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.756 [ 00:12:04.756 { 00:12:04.756 "name": "BaseBdev1", 00:12:04.756 "aliases": [ 00:12:04.756 "a14466fb-83f2-4366-9a24-e3a12c66c86e" 00:12:04.756 ], 00:12:04.756 "product_name": "Malloc disk", 00:12:04.756 "block_size": 512, 00:12:04.756 "num_blocks": 65536, 00:12:04.756 "uuid": "a14466fb-83f2-4366-9a24-e3a12c66c86e", 00:12:04.756 "assigned_rate_limits": { 00:12:04.756 "rw_ios_per_sec": 0, 00:12:04.756 "rw_mbytes_per_sec": 0, 
00:12:04.756 "r_mbytes_per_sec": 0, 00:12:04.756 "w_mbytes_per_sec": 0 00:12:04.756 }, 00:12:04.756 "claimed": true, 00:12:04.756 "claim_type": "exclusive_write", 00:12:04.756 "zoned": false, 00:12:04.756 "supported_io_types": { 00:12:04.756 "read": true, 00:12:04.756 "write": true, 00:12:04.756 "unmap": true, 00:12:04.756 "flush": true, 00:12:04.756 "reset": true, 00:12:04.756 "nvme_admin": false, 00:12:04.756 "nvme_io": false, 00:12:04.756 "nvme_io_md": false, 00:12:04.756 "write_zeroes": true, 00:12:04.756 "zcopy": true, 00:12:04.756 "get_zone_info": false, 00:12:04.756 "zone_management": false, 00:12:04.756 "zone_append": false, 00:12:04.756 "compare": false, 00:12:04.756 "compare_and_write": false, 00:12:04.756 "abort": true, 00:12:04.756 "seek_hole": false, 00:12:04.756 "seek_data": false, 00:12:04.756 "copy": true, 00:12:04.756 "nvme_iov_md": false 00:12:04.756 }, 00:12:04.756 "memory_domains": [ 00:12:04.756 { 00:12:04.756 "dma_device_id": "system", 00:12:04.756 "dma_device_type": 1 00:12:04.756 }, 00:12:04.756 { 00:12:04.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.756 "dma_device_type": 2 00:12:04.756 } 00:12:04.756 ], 00:12:04.756 "driver_specific": {} 00:12:04.756 } 00:12:04.756 ] 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.756 19:01:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.756 "name": "Existed_Raid", 00:12:04.756 "uuid": "a083e1d2-243f-49c6-9849-135c2107ea96", 00:12:04.756 "strip_size_kb": 64, 00:12:04.756 "state": "configuring", 00:12:04.756 "raid_level": "concat", 00:12:04.756 "superblock": true, 00:12:04.756 "num_base_bdevs": 4, 00:12:04.756 "num_base_bdevs_discovered": 3, 00:12:04.756 "num_base_bdevs_operational": 4, 00:12:04.756 "base_bdevs_list": [ 00:12:04.756 { 00:12:04.756 "name": "BaseBdev1", 00:12:04.756 "uuid": "a14466fb-83f2-4366-9a24-e3a12c66c86e", 00:12:04.756 "is_configured": true, 00:12:04.756 "data_offset": 2048, 00:12:04.756 "data_size": 63488 00:12:04.756 }, 00:12:04.756 { 
00:12:04.756 "name": null, 00:12:04.756 "uuid": "55721de7-7b93-4c96-bd1e-ef64acb43b05", 00:12:04.756 "is_configured": false, 00:12:04.756 "data_offset": 0, 00:12:04.756 "data_size": 63488 00:12:04.756 }, 00:12:04.756 { 00:12:04.756 "name": "BaseBdev3", 00:12:04.756 "uuid": "c8095ef0-8aa5-482a-8c2c-4922efd8302b", 00:12:04.756 "is_configured": true, 00:12:04.756 "data_offset": 2048, 00:12:04.756 "data_size": 63488 00:12:04.756 }, 00:12:04.756 { 00:12:04.756 "name": "BaseBdev4", 00:12:04.756 "uuid": "4ec9917c-21cb-470c-8968-4e575fb46a1a", 00:12:04.756 "is_configured": true, 00:12:04.756 "data_offset": 2048, 00:12:04.756 "data_size": 63488 00:12:04.756 } 00:12:04.756 ] 00:12:04.756 }' 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.756 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.322 [2024-11-26 19:01:31.778123] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.322 19:01:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.322 "name": "Existed_Raid", 00:12:05.322 "uuid": "a083e1d2-243f-49c6-9849-135c2107ea96", 00:12:05.322 "strip_size_kb": 64, 00:12:05.322 "state": "configuring", 00:12:05.322 "raid_level": "concat", 00:12:05.322 "superblock": true, 00:12:05.322 "num_base_bdevs": 4, 00:12:05.322 "num_base_bdevs_discovered": 2, 00:12:05.322 "num_base_bdevs_operational": 4, 00:12:05.322 "base_bdevs_list": [ 00:12:05.322 { 00:12:05.322 "name": "BaseBdev1", 00:12:05.322 "uuid": "a14466fb-83f2-4366-9a24-e3a12c66c86e", 00:12:05.322 "is_configured": true, 00:12:05.322 "data_offset": 2048, 00:12:05.322 "data_size": 63488 00:12:05.322 }, 00:12:05.322 { 00:12:05.322 "name": null, 00:12:05.322 "uuid": "55721de7-7b93-4c96-bd1e-ef64acb43b05", 00:12:05.322 "is_configured": false, 00:12:05.322 "data_offset": 0, 00:12:05.322 "data_size": 63488 00:12:05.322 }, 00:12:05.322 { 00:12:05.322 "name": null, 00:12:05.322 "uuid": "c8095ef0-8aa5-482a-8c2c-4922efd8302b", 00:12:05.322 "is_configured": false, 00:12:05.322 "data_offset": 0, 00:12:05.322 "data_size": 63488 00:12:05.322 }, 00:12:05.322 { 00:12:05.322 "name": "BaseBdev4", 00:12:05.322 "uuid": "4ec9917c-21cb-470c-8968-4e575fb46a1a", 00:12:05.322 "is_configured": true, 00:12:05.322 "data_offset": 2048, 00:12:05.322 "data_size": 63488 00:12:05.322 } 00:12:05.322 ] 00:12:05.322 }' 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.322 19:01:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.897 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.898 
19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.898 [2024-11-26 19:01:32.342197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.898 "name": "Existed_Raid", 00:12:05.898 "uuid": "a083e1d2-243f-49c6-9849-135c2107ea96", 00:12:05.898 "strip_size_kb": 64, 00:12:05.898 "state": "configuring", 00:12:05.898 "raid_level": "concat", 00:12:05.898 "superblock": true, 00:12:05.898 "num_base_bdevs": 4, 00:12:05.898 "num_base_bdevs_discovered": 3, 00:12:05.898 "num_base_bdevs_operational": 4, 00:12:05.898 "base_bdevs_list": [ 00:12:05.898 { 00:12:05.898 "name": "BaseBdev1", 00:12:05.898 "uuid": "a14466fb-83f2-4366-9a24-e3a12c66c86e", 00:12:05.898 "is_configured": true, 00:12:05.898 "data_offset": 2048, 00:12:05.898 "data_size": 63488 00:12:05.898 }, 00:12:05.898 { 00:12:05.898 "name": null, 00:12:05.898 "uuid": "55721de7-7b93-4c96-bd1e-ef64acb43b05", 00:12:05.898 "is_configured": false, 00:12:05.898 "data_offset": 0, 00:12:05.898 "data_size": 63488 00:12:05.898 }, 00:12:05.898 { 00:12:05.898 "name": "BaseBdev3", 00:12:05.898 "uuid": "c8095ef0-8aa5-482a-8c2c-4922efd8302b", 00:12:05.898 "is_configured": true, 00:12:05.898 "data_offset": 2048, 00:12:05.898 "data_size": 63488 00:12:05.898 }, 00:12:05.898 { 00:12:05.898 "name": "BaseBdev4", 00:12:05.898 "uuid": 
"4ec9917c-21cb-470c-8968-4e575fb46a1a", 00:12:05.898 "is_configured": true, 00:12:05.898 "data_offset": 2048, 00:12:05.898 "data_size": 63488 00:12:05.898 } 00:12:05.898 ] 00:12:05.898 }' 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.898 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.471 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:06.471 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.471 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.471 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.471 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.471 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:06.471 19:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:06.471 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.471 19:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.471 [2024-11-26 19:01:32.930440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.471 "name": "Existed_Raid", 00:12:06.471 "uuid": "a083e1d2-243f-49c6-9849-135c2107ea96", 00:12:06.471 "strip_size_kb": 64, 00:12:06.471 "state": "configuring", 00:12:06.471 "raid_level": "concat", 00:12:06.471 "superblock": true, 00:12:06.471 "num_base_bdevs": 4, 00:12:06.471 "num_base_bdevs_discovered": 2, 00:12:06.471 "num_base_bdevs_operational": 4, 00:12:06.471 "base_bdevs_list": [ 00:12:06.471 { 00:12:06.471 "name": null, 00:12:06.471 
"uuid": "a14466fb-83f2-4366-9a24-e3a12c66c86e", 00:12:06.471 "is_configured": false, 00:12:06.471 "data_offset": 0, 00:12:06.471 "data_size": 63488 00:12:06.471 }, 00:12:06.471 { 00:12:06.471 "name": null, 00:12:06.471 "uuid": "55721de7-7b93-4c96-bd1e-ef64acb43b05", 00:12:06.471 "is_configured": false, 00:12:06.471 "data_offset": 0, 00:12:06.471 "data_size": 63488 00:12:06.471 }, 00:12:06.471 { 00:12:06.471 "name": "BaseBdev3", 00:12:06.471 "uuid": "c8095ef0-8aa5-482a-8c2c-4922efd8302b", 00:12:06.471 "is_configured": true, 00:12:06.471 "data_offset": 2048, 00:12:06.471 "data_size": 63488 00:12:06.471 }, 00:12:06.471 { 00:12:06.471 "name": "BaseBdev4", 00:12:06.471 "uuid": "4ec9917c-21cb-470c-8968-4e575fb46a1a", 00:12:06.471 "is_configured": true, 00:12:06.471 "data_offset": 2048, 00:12:06.471 "data_size": 63488 00:12:06.471 } 00:12:06.471 ] 00:12:06.471 }' 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.471 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.038 [2024-11-26 19:01:33.610082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.038 19:01:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.038 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.295 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.295 "name": "Existed_Raid", 00:12:07.295 "uuid": "a083e1d2-243f-49c6-9849-135c2107ea96", 00:12:07.295 "strip_size_kb": 64, 00:12:07.295 "state": "configuring", 00:12:07.295 "raid_level": "concat", 00:12:07.295 "superblock": true, 00:12:07.295 "num_base_bdevs": 4, 00:12:07.295 "num_base_bdevs_discovered": 3, 00:12:07.295 "num_base_bdevs_operational": 4, 00:12:07.295 "base_bdevs_list": [ 00:12:07.295 { 00:12:07.295 "name": null, 00:12:07.295 "uuid": "a14466fb-83f2-4366-9a24-e3a12c66c86e", 00:12:07.296 "is_configured": false, 00:12:07.296 "data_offset": 0, 00:12:07.296 "data_size": 63488 00:12:07.296 }, 00:12:07.296 { 00:12:07.296 "name": "BaseBdev2", 00:12:07.296 "uuid": "55721de7-7b93-4c96-bd1e-ef64acb43b05", 00:12:07.296 "is_configured": true, 00:12:07.296 "data_offset": 2048, 00:12:07.296 "data_size": 63488 00:12:07.296 }, 00:12:07.296 { 00:12:07.296 "name": "BaseBdev3", 00:12:07.296 "uuid": "c8095ef0-8aa5-482a-8c2c-4922efd8302b", 00:12:07.296 "is_configured": true, 00:12:07.296 "data_offset": 2048, 00:12:07.296 "data_size": 63488 00:12:07.296 }, 00:12:07.296 { 00:12:07.296 "name": "BaseBdev4", 00:12:07.296 "uuid": "4ec9917c-21cb-470c-8968-4e575fb46a1a", 00:12:07.296 "is_configured": true, 00:12:07.296 "data_offset": 2048, 00:12:07.296 "data_size": 63488 00:12:07.296 } 00:12:07.296 ] 00:12:07.296 }' 00:12:07.296 19:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.296 19:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.553 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.553 19:01:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.553 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.553 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.553 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a14466fb-83f2-4366-9a24-e3a12c66c86e 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.811 [2024-11-26 19:01:34.282971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:07.811 [2024-11-26 19:01:34.283272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:07.811 [2024-11-26 19:01:34.283306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:07.811 [2024-11-26 19:01:34.283724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:07.811 NewBaseBdev 00:12:07.811 [2024-11-26 19:01:34.284129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:07.811 [2024-11-26 19:01:34.284262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, ra 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.811 id_bdev 0x617000008200 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:07.811 [2024-11-26 19:01:34.284606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.811 19:01:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.811 [ 00:12:07.811 { 00:12:07.811 "name": "NewBaseBdev", 00:12:07.811 "aliases": [ 00:12:07.811 "a14466fb-83f2-4366-9a24-e3a12c66c86e" 00:12:07.811 ], 00:12:07.811 "product_name": "Malloc disk", 00:12:07.811 "block_size": 512, 00:12:07.811 "num_blocks": 65536, 00:12:07.811 "uuid": "a14466fb-83f2-4366-9a24-e3a12c66c86e", 00:12:07.811 "assigned_rate_limits": { 00:12:07.811 "rw_ios_per_sec": 0, 00:12:07.811 "rw_mbytes_per_sec": 0, 00:12:07.811 "r_mbytes_per_sec": 0, 00:12:07.811 "w_mbytes_per_sec": 0 00:12:07.811 }, 00:12:07.811 "claimed": true, 00:12:07.811 "claim_type": "exclusive_write", 00:12:07.811 "zoned": false, 00:12:07.811 "supported_io_types": { 00:12:07.811 "read": true, 00:12:07.811 "write": true, 00:12:07.811 "unmap": true, 00:12:07.811 "flush": true, 00:12:07.811 "reset": true, 00:12:07.811 "nvme_admin": false, 00:12:07.811 "nvme_io": false, 00:12:07.811 "nvme_io_md": false, 00:12:07.811 "write_zeroes": true, 00:12:07.811 "zcopy": true, 00:12:07.811 "get_zone_info": false, 00:12:07.811 "zone_management": false, 00:12:07.811 "zone_append": false, 00:12:07.811 "compare": false, 00:12:07.811 "compare_and_write": false, 00:12:07.811 "abort": true, 00:12:07.811 "seek_hole": false, 00:12:07.811 "seek_data": false, 00:12:07.811 "copy": true, 00:12:07.811 "nvme_iov_md": false 00:12:07.811 }, 00:12:07.811 "memory_domains": [ 00:12:07.811 { 00:12:07.811 "dma_device_id": "system", 00:12:07.811 "dma_device_type": 1 00:12:07.811 }, 00:12:07.811 { 00:12:07.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.811 "dma_device_type": 2 00:12:07.811 } 00:12:07.811 ], 00:12:07.811 "driver_specific": {} 00:12:07.811 } 00:12:07.811 ] 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.811 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.811 19:01:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.812 "name": "Existed_Raid", 00:12:07.812 "uuid": "a083e1d2-243f-49c6-9849-135c2107ea96", 00:12:07.812 "strip_size_kb": 64, 00:12:07.812 
"state": "online", 00:12:07.812 "raid_level": "concat", 00:12:07.812 "superblock": true, 00:12:07.812 "num_base_bdevs": 4, 00:12:07.812 "num_base_bdevs_discovered": 4, 00:12:07.812 "num_base_bdevs_operational": 4, 00:12:07.812 "base_bdevs_list": [ 00:12:07.812 { 00:12:07.812 "name": "NewBaseBdev", 00:12:07.812 "uuid": "a14466fb-83f2-4366-9a24-e3a12c66c86e", 00:12:07.812 "is_configured": true, 00:12:07.812 "data_offset": 2048, 00:12:07.812 "data_size": 63488 00:12:07.812 }, 00:12:07.812 { 00:12:07.812 "name": "BaseBdev2", 00:12:07.812 "uuid": "55721de7-7b93-4c96-bd1e-ef64acb43b05", 00:12:07.812 "is_configured": true, 00:12:07.812 "data_offset": 2048, 00:12:07.812 "data_size": 63488 00:12:07.812 }, 00:12:07.812 { 00:12:07.812 "name": "BaseBdev3", 00:12:07.812 "uuid": "c8095ef0-8aa5-482a-8c2c-4922efd8302b", 00:12:07.812 "is_configured": true, 00:12:07.812 "data_offset": 2048, 00:12:07.812 "data_size": 63488 00:12:07.812 }, 00:12:07.812 { 00:12:07.812 "name": "BaseBdev4", 00:12:07.812 "uuid": "4ec9917c-21cb-470c-8968-4e575fb46a1a", 00:12:07.812 "is_configured": true, 00:12:07.812 "data_offset": 2048, 00:12:07.812 "data_size": 63488 00:12:07.812 } 00:12:07.812 ] 00:12:07.812 }' 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.812 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.378 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:08.378 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:08.378 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:08.378 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:08.378 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:08.378 
19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:08.378 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:08.378 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.378 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:08.378 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.378 [2024-11-26 19:01:34.839683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.378 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.378 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:08.378 "name": "Existed_Raid", 00:12:08.378 "aliases": [ 00:12:08.378 "a083e1d2-243f-49c6-9849-135c2107ea96" 00:12:08.378 ], 00:12:08.378 "product_name": "Raid Volume", 00:12:08.378 "block_size": 512, 00:12:08.378 "num_blocks": 253952, 00:12:08.378 "uuid": "a083e1d2-243f-49c6-9849-135c2107ea96", 00:12:08.378 "assigned_rate_limits": { 00:12:08.378 "rw_ios_per_sec": 0, 00:12:08.378 "rw_mbytes_per_sec": 0, 00:12:08.378 "r_mbytes_per_sec": 0, 00:12:08.378 "w_mbytes_per_sec": 0 00:12:08.378 }, 00:12:08.378 "claimed": false, 00:12:08.378 "zoned": false, 00:12:08.378 "supported_io_types": { 00:12:08.378 "read": true, 00:12:08.378 "write": true, 00:12:08.378 "unmap": true, 00:12:08.378 "flush": true, 00:12:08.378 "reset": true, 00:12:08.378 "nvme_admin": false, 00:12:08.378 "nvme_io": false, 00:12:08.378 "nvme_io_md": false, 00:12:08.378 "write_zeroes": true, 00:12:08.378 "zcopy": false, 00:12:08.378 "get_zone_info": false, 00:12:08.378 "zone_management": false, 00:12:08.378 "zone_append": false, 00:12:08.378 "compare": false, 00:12:08.378 "compare_and_write": false, 00:12:08.378 "abort": 
false, 00:12:08.378 "seek_hole": false, 00:12:08.378 "seek_data": false, 00:12:08.378 "copy": false, 00:12:08.378 "nvme_iov_md": false 00:12:08.378 }, 00:12:08.378 "memory_domains": [ 00:12:08.378 { 00:12:08.378 "dma_device_id": "system", 00:12:08.378 "dma_device_type": 1 00:12:08.378 }, 00:12:08.378 { 00:12:08.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.379 "dma_device_type": 2 00:12:08.379 }, 00:12:08.379 { 00:12:08.379 "dma_device_id": "system", 00:12:08.379 "dma_device_type": 1 00:12:08.379 }, 00:12:08.379 { 00:12:08.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.379 "dma_device_type": 2 00:12:08.379 }, 00:12:08.379 { 00:12:08.379 "dma_device_id": "system", 00:12:08.379 "dma_device_type": 1 00:12:08.379 }, 00:12:08.379 { 00:12:08.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.379 "dma_device_type": 2 00:12:08.379 }, 00:12:08.379 { 00:12:08.379 "dma_device_id": "system", 00:12:08.379 "dma_device_type": 1 00:12:08.379 }, 00:12:08.379 { 00:12:08.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.379 "dma_device_type": 2 00:12:08.379 } 00:12:08.379 ], 00:12:08.379 "driver_specific": { 00:12:08.379 "raid": { 00:12:08.379 "uuid": "a083e1d2-243f-49c6-9849-135c2107ea96", 00:12:08.379 "strip_size_kb": 64, 00:12:08.379 "state": "online", 00:12:08.379 "raid_level": "concat", 00:12:08.379 "superblock": true, 00:12:08.379 "num_base_bdevs": 4, 00:12:08.379 "num_base_bdevs_discovered": 4, 00:12:08.379 "num_base_bdevs_operational": 4, 00:12:08.379 "base_bdevs_list": [ 00:12:08.379 { 00:12:08.379 "name": "NewBaseBdev", 00:12:08.379 "uuid": "a14466fb-83f2-4366-9a24-e3a12c66c86e", 00:12:08.379 "is_configured": true, 00:12:08.379 "data_offset": 2048, 00:12:08.379 "data_size": 63488 00:12:08.379 }, 00:12:08.379 { 00:12:08.379 "name": "BaseBdev2", 00:12:08.379 "uuid": "55721de7-7b93-4c96-bd1e-ef64acb43b05", 00:12:08.379 "is_configured": true, 00:12:08.379 "data_offset": 2048, 00:12:08.379 "data_size": 63488 00:12:08.379 }, 00:12:08.379 { 00:12:08.379 
"name": "BaseBdev3", 00:12:08.379 "uuid": "c8095ef0-8aa5-482a-8c2c-4922efd8302b", 00:12:08.379 "is_configured": true, 00:12:08.379 "data_offset": 2048, 00:12:08.379 "data_size": 63488 00:12:08.379 }, 00:12:08.379 { 00:12:08.379 "name": "BaseBdev4", 00:12:08.379 "uuid": "4ec9917c-21cb-470c-8968-4e575fb46a1a", 00:12:08.379 "is_configured": true, 00:12:08.379 "data_offset": 2048, 00:12:08.379 "data_size": 63488 00:12:08.379 } 00:12:08.379 ] 00:12:08.379 } 00:12:08.379 } 00:12:08.379 }' 00:12:08.379 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:08.379 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:08.379 BaseBdev2 00:12:08.379 BaseBdev3 00:12:08.379 BaseBdev4' 00:12:08.379 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.379 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:08.379 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.379 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:08.379 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.379 19:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.379 19:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.637 19:01:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.637 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.637 [2024-11-26 19:01:35.207256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:08.637 [2024-11-26 19:01:35.207298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.637 [2024-11-26 19:01:35.207550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.638 [2024-11-26 19:01:35.207666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.638 [2024-11-26 19:01:35.207685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72458 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72458 ']' 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72458 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72458 00:12:08.638 killing process with pid 72458 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72458' 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72458 00:12:08.638 [2024-11-26 19:01:35.241189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:08.638 19:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72458 00:12:09.205 [2024-11-26 19:01:35.612278] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:10.582 ************************************ 00:12:10.582 END TEST raid_state_function_test_sb 00:12:10.582 ************************************ 00:12:10.582 19:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:10.582 00:12:10.582 real 0m13.191s 00:12:10.582 user 0m21.726s 00:12:10.582 sys 
0m1.876s 00:12:10.582 19:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.582 19:01:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.582 19:01:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:10.582 19:01:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:10.582 19:01:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.582 19:01:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:10.582 ************************************ 00:12:10.582 START TEST raid_superblock_test 00:12:10.582 ************************************ 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73145 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73145 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73145 ']' 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.582 19:01:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.582 [2024-11-26 19:01:36.951835] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:12:10.582 [2024-11-26 19:01:36.952083] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73145 ] 00:12:10.582 [2024-11-26 19:01:37.141705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.841 [2024-11-26 19:01:37.284537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.099 [2024-11-26 19:01:37.500799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.099 [2024-11-26 19:01:37.500839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:11.357 
19:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.357 malloc1 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.357 [2024-11-26 19:01:37.946313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:11.357 [2024-11-26 19:01:37.946535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.357 [2024-11-26 19:01:37.946625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:11.357 [2024-11-26 19:01:37.946649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.357 [2024-11-26 19:01:37.949697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.357 [2024-11-26 19:01:37.949907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:11.357 pt1 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.357 19:01:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.617 malloc2 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.617 [2024-11-26 19:01:38.010529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:11.617 [2024-11-26 19:01:38.010607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.617 [2024-11-26 19:01:38.010649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:11.617 [2024-11-26 19:01:38.010666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.617 [2024-11-26 19:01:38.013699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.617 [2024-11-26 19:01:38.013894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:11.617 
pt2 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.617 malloc3 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.617 [2024-11-26 19:01:38.082794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:11.617 [2024-11-26 19:01:38.082864] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.617 [2024-11-26 19:01:38.082900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:11.617 [2024-11-26 19:01:38.082917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.617 [2024-11-26 19:01:38.085846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.617 [2024-11-26 19:01:38.086018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:11.617 pt3 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.617 malloc4 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.617 [2024-11-26 19:01:38.145731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:11.617 [2024-11-26 19:01:38.145962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.617 [2024-11-26 19:01:38.146005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:11.617 [2024-11-26 19:01:38.146021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.617 [2024-11-26 19:01:38.149061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.617 [2024-11-26 19:01:38.149220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:11.617 pt4 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.617 [2024-11-26 19:01:38.157900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:11.617 [2024-11-26 
19:01:38.160781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:11.617 [2024-11-26 19:01:38.161055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:11.617 [2024-11-26 19:01:38.161176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:11.617 [2024-11-26 19:01:38.161505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:11.617 [2024-11-26 19:01:38.161633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:11.617 [2024-11-26 19:01:38.162083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:11.617 [2024-11-26 19:01:38.162463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:11.617 [2024-11-26 19:01:38.162494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:11.617 [2024-11-26 19:01:38.162735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.617 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.617 "name": "raid_bdev1", 00:12:11.617 "uuid": "88818123-bb75-4340-a8be-60b4ff517fe8", 00:12:11.617 "strip_size_kb": 64, 00:12:11.617 "state": "online", 00:12:11.617 "raid_level": "concat", 00:12:11.617 "superblock": true, 00:12:11.617 "num_base_bdevs": 4, 00:12:11.617 "num_base_bdevs_discovered": 4, 00:12:11.617 "num_base_bdevs_operational": 4, 00:12:11.617 "base_bdevs_list": [ 00:12:11.617 { 00:12:11.617 "name": "pt1", 00:12:11.617 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:11.617 "is_configured": true, 00:12:11.617 "data_offset": 2048, 00:12:11.617 "data_size": 63488 00:12:11.617 }, 00:12:11.617 { 00:12:11.617 "name": "pt2", 00:12:11.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.618 "is_configured": true, 00:12:11.618 "data_offset": 2048, 00:12:11.618 "data_size": 63488 00:12:11.618 }, 00:12:11.618 { 00:12:11.618 "name": "pt3", 00:12:11.618 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:11.618 "is_configured": true, 00:12:11.618 "data_offset": 2048, 00:12:11.618 
"data_size": 63488 00:12:11.618 }, 00:12:11.618 { 00:12:11.618 "name": "pt4", 00:12:11.618 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:11.618 "is_configured": true, 00:12:11.618 "data_offset": 2048, 00:12:11.618 "data_size": 63488 00:12:11.618 } 00:12:11.618 ] 00:12:11.618 }' 00:12:11.618 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.618 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.185 [2024-11-26 19:01:38.687299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.185 "name": "raid_bdev1", 00:12:12.185 "aliases": [ 00:12:12.185 "88818123-bb75-4340-a8be-60b4ff517fe8" 
00:12:12.185 ], 00:12:12.185 "product_name": "Raid Volume", 00:12:12.185 "block_size": 512, 00:12:12.185 "num_blocks": 253952, 00:12:12.185 "uuid": "88818123-bb75-4340-a8be-60b4ff517fe8", 00:12:12.185 "assigned_rate_limits": { 00:12:12.185 "rw_ios_per_sec": 0, 00:12:12.185 "rw_mbytes_per_sec": 0, 00:12:12.185 "r_mbytes_per_sec": 0, 00:12:12.185 "w_mbytes_per_sec": 0 00:12:12.185 }, 00:12:12.185 "claimed": false, 00:12:12.185 "zoned": false, 00:12:12.185 "supported_io_types": { 00:12:12.185 "read": true, 00:12:12.185 "write": true, 00:12:12.185 "unmap": true, 00:12:12.185 "flush": true, 00:12:12.185 "reset": true, 00:12:12.185 "nvme_admin": false, 00:12:12.185 "nvme_io": false, 00:12:12.185 "nvme_io_md": false, 00:12:12.185 "write_zeroes": true, 00:12:12.185 "zcopy": false, 00:12:12.185 "get_zone_info": false, 00:12:12.185 "zone_management": false, 00:12:12.185 "zone_append": false, 00:12:12.185 "compare": false, 00:12:12.185 "compare_and_write": false, 00:12:12.185 "abort": false, 00:12:12.185 "seek_hole": false, 00:12:12.185 "seek_data": false, 00:12:12.185 "copy": false, 00:12:12.185 "nvme_iov_md": false 00:12:12.185 }, 00:12:12.185 "memory_domains": [ 00:12:12.185 { 00:12:12.185 "dma_device_id": "system", 00:12:12.185 "dma_device_type": 1 00:12:12.185 }, 00:12:12.185 { 00:12:12.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.185 "dma_device_type": 2 00:12:12.185 }, 00:12:12.185 { 00:12:12.185 "dma_device_id": "system", 00:12:12.185 "dma_device_type": 1 00:12:12.185 }, 00:12:12.185 { 00:12:12.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.185 "dma_device_type": 2 00:12:12.185 }, 00:12:12.185 { 00:12:12.185 "dma_device_id": "system", 00:12:12.185 "dma_device_type": 1 00:12:12.185 }, 00:12:12.185 { 00:12:12.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.185 "dma_device_type": 2 00:12:12.185 }, 00:12:12.185 { 00:12:12.185 "dma_device_id": "system", 00:12:12.185 "dma_device_type": 1 00:12:12.185 }, 00:12:12.185 { 00:12:12.185 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:12.185 "dma_device_type": 2 00:12:12.185 } 00:12:12.185 ], 00:12:12.185 "driver_specific": { 00:12:12.185 "raid": { 00:12:12.185 "uuid": "88818123-bb75-4340-a8be-60b4ff517fe8", 00:12:12.185 "strip_size_kb": 64, 00:12:12.185 "state": "online", 00:12:12.185 "raid_level": "concat", 00:12:12.185 "superblock": true, 00:12:12.185 "num_base_bdevs": 4, 00:12:12.185 "num_base_bdevs_discovered": 4, 00:12:12.185 "num_base_bdevs_operational": 4, 00:12:12.185 "base_bdevs_list": [ 00:12:12.185 { 00:12:12.185 "name": "pt1", 00:12:12.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:12.185 "is_configured": true, 00:12:12.185 "data_offset": 2048, 00:12:12.185 "data_size": 63488 00:12:12.185 }, 00:12:12.185 { 00:12:12.185 "name": "pt2", 00:12:12.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.185 "is_configured": true, 00:12:12.185 "data_offset": 2048, 00:12:12.185 "data_size": 63488 00:12:12.185 }, 00:12:12.185 { 00:12:12.185 "name": "pt3", 00:12:12.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.185 "is_configured": true, 00:12:12.185 "data_offset": 2048, 00:12:12.185 "data_size": 63488 00:12:12.185 }, 00:12:12.185 { 00:12:12.185 "name": "pt4", 00:12:12.185 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:12.185 "is_configured": true, 00:12:12.185 "data_offset": 2048, 00:12:12.185 "data_size": 63488 00:12:12.185 } 00:12:12.185 ] 00:12:12.185 } 00:12:12.185 } 00:12:12.185 }' 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:12.185 pt2 00:12:12.185 pt3 00:12:12.185 pt4' 00:12:12.185 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.444 19:01:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.444 19:01:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.444 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:12:12.445 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.714 [2024-11-26 19:01:39.067375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=88818123-bb75-4340-a8be-60b4ff517fe8
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 88818123-bb75-4340-a8be-60b4ff517fe8 ']'
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.714 [2024-11-26 19:01:39.119023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:12.714 [2024-11-26 19:01:39.119164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:12.714 [2024-11-26 19:01:39.119405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:12.714 [2024-11-26 19:01:39.119621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:12.714 [2024-11-26 19:01:39.119780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:12.714 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.715 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.715 [2024-11-26 19:01:39.275077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:12.715 [2024-11-26 19:01:39.277850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:12.715 [2024-11-26 19:01:39.277920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:12.715 [2024-11-26 19:01:39.277974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:12:12.715 [2024-11-26 19:01:39.278051] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:12.715 [2024-11-26 19:01:39.278122] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:12.715 [2024-11-26 19:01:39.278155] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:12.715 [2024-11-26 19:01:39.278188] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:12:12.715 [2024-11-26 19:01:39.278211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:12.715 [2024-11-26 19:01:39.278228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:12.715 request:
00:12:12.715 {
00:12:12.715 "name": "raid_bdev1",
00:12:12.715 "raid_level": "concat",
00:12:12.715 "base_bdevs": [
00:12:12.715 "malloc1",
00:12:12.715 "malloc2",
00:12:12.715 "malloc3",
00:12:12.715 "malloc4"
00:12:12.715 ],
00:12:12.715 "strip_size_kb": 64,
00:12:12.716 "superblock": false,
00:12:12.716 "method": "bdev_raid_create",
00:12:12.716 "req_id": 1
00:12:12.716 }
00:12:12.716 Got JSON-RPC error response
00:12:12.716 response:
00:12:12.716 {
00:12:12.716 "code": -17,
00:12:12.716 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:12.716 }
00:12:12.716 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:12:12.716 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:12:12.716 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:12.716 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:12.716 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:12.716 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:12.716 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:12.716 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.716 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.716 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.983 [2024-11-26 19:01:39.343070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:12.983 [2024-11-26 19:01:39.343314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:12.983 [2024-11-26 19:01:39.343389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:12.983 [2024-11-26 19:01:39.343514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:12.983 [2024-11-26 19:01:39.346769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:12.983 [2024-11-26 19:01:39.346986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:12.983 [2024-11-26 19:01:39.347191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:12.983 [2024-11-26 19:01:39.347460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:12.983 pt1
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:12.983 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:12.984 "name": "raid_bdev1",
00:12:12.984 "uuid": "88818123-bb75-4340-a8be-60b4ff517fe8",
00:12:12.984 "strip_size_kb": 64,
00:12:12.984 "state": "configuring",
00:12:12.984 "raid_level": "concat",
00:12:12.984 "superblock": true,
00:12:12.984 "num_base_bdevs": 4,
00:12:12.984 "num_base_bdevs_discovered": 1,
00:12:12.984 "num_base_bdevs_operational": 4,
00:12:12.984 "base_bdevs_list": [
00:12:12.984 {
00:12:12.984 "name": "pt1",
00:12:12.984 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:12.984 "is_configured": true,
00:12:12.984 "data_offset": 2048,
00:12:12.984 "data_size": 63488
00:12:12.984 },
00:12:12.984 {
00:12:12.984 "name": null,
00:12:12.984 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:12.984 "is_configured": false,
00:12:12.984 "data_offset": 2048,
00:12:12.984 "data_size": 63488
00:12:12.984 },
00:12:12.984 {
00:12:12.984 "name": null,
00:12:12.984 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:12.984 "is_configured": false,
00:12:12.984 "data_offset": 2048,
00:12:12.984 "data_size": 63488
00:12:12.984 },
00:12:12.984 {
00:12:12.984 "name": null,
00:12:12.984 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:12.984 "is_configured": false,
00:12:12.984 "data_offset": 2048,
00:12:12.984 "data_size": 63488
00:12:12.984 }
00:12:12.984 ]
00:12:12.984 }'
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:12.984 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.242 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.501 [2024-11-26 19:01:39.871489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:13.501 [2024-11-26 19:01:39.871598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:13.501 [2024-11-26 19:01:39.871655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:13.501 [2024-11-26 19:01:39.871704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:13.501 [2024-11-26 19:01:39.872351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:13.501 [2024-11-26 19:01:39.872420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:13.501 [2024-11-26 19:01:39.872543] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:13.501 [2024-11-26 19:01:39.872591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:13.501 pt2
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.501 [2024-11-26 19:01:39.879436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:13.501 "name": "raid_bdev1",
00:12:13.501 "uuid": "88818123-bb75-4340-a8be-60b4ff517fe8",
00:12:13.501 "strip_size_kb": 64,
00:12:13.501 "state": "configuring",
00:12:13.501 "raid_level": "concat",
00:12:13.501 "superblock": true,
00:12:13.501 "num_base_bdevs": 4,
00:12:13.501 "num_base_bdevs_discovered": 1,
00:12:13.501 "num_base_bdevs_operational": 4,
00:12:13.501 "base_bdevs_list": [
00:12:13.501 {
00:12:13.501 "name": "pt1",
00:12:13.501 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:13.501 "is_configured": true,
00:12:13.501 "data_offset": 2048,
00:12:13.501 "data_size": 63488
00:12:13.501 },
00:12:13.501 {
00:12:13.501 "name": null,
00:12:13.501 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:13.501 "is_configured": false,
00:12:13.501 "data_offset": 0,
00:12:13.501 "data_size": 63488
00:12:13.501 },
00:12:13.501 {
00:12:13.501 "name": null,
00:12:13.501 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:13.501 "is_configured": false,
00:12:13.501 "data_offset": 2048,
00:12:13.501 "data_size": 63488
00:12:13.501 },
00:12:13.501 {
00:12:13.501 "name": null,
00:12:13.501 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:13.501 "is_configured": false,
00:12:13.501 "data_offset": 2048,
00:12:13.501 "data_size": 63488
00:12:13.501 }
00:12:13.501 ]
00:12:13.501 }'
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:13.501 19:01:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.068 [2024-11-26 19:01:40.403668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:14.068 [2024-11-26 19:01:40.403783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:14.068 [2024-11-26 19:01:40.403816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:12:14.068 [2024-11-26 19:01:40.403831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:14.068 [2024-11-26 19:01:40.404461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:14.068 [2024-11-26 19:01:40.404488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:14.068 [2024-11-26 19:01:40.404616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:14.068 [2024-11-26 19:01:40.404649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:14.068 pt2
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.068 [2024-11-26 19:01:40.411585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:14.068 [2024-11-26 19:01:40.411656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:14.068 [2024-11-26 19:01:40.411698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:12:14.068 [2024-11-26 19:01:40.411711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:14.068 [2024-11-26 19:01:40.412103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:14.068 [2024-11-26 19:01:40.412133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:14.068 [2024-11-26 19:01:40.412204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:14.068 [2024-11-26 19:01:40.412235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:14.068 pt3
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.068 [2024-11-26 19:01:40.419562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:14.068 [2024-11-26 19:01:40.419627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:14.068 [2024-11-26 19:01:40.419666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:12:14.068 [2024-11-26 19:01:40.419679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:14.068 [2024-11-26 19:01:40.420116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:14.068 [2024-11-26 19:01:40.420144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:14.068 [2024-11-26 19:01:40.420249] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:14.068 [2024-11-26 19:01:40.420278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:14.068 [2024-11-26 19:01:40.420496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:14.068 [2024-11-26 19:01:40.420513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:12:14.068 [2024-11-26 19:01:40.420884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:12:14.068 [2024-11-26 19:01:40.421122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:14.068 [2024-11-26 19:01:40.421145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:12:14.068 [2024-11-26 19:01:40.421324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:14.068 pt4
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.068 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:14.068 "name": "raid_bdev1",
00:12:14.068 "uuid": "88818123-bb75-4340-a8be-60b4ff517fe8",
00:12:14.068 "strip_size_kb": 64,
00:12:14.068 "state": "online",
00:12:14.068 "raid_level": "concat",
00:12:14.068 "superblock": true,
00:12:14.068 "num_base_bdevs": 4,
00:12:14.068 "num_base_bdevs_discovered": 4,
00:12:14.068 "num_base_bdevs_operational": 4,
00:12:14.068 "base_bdevs_list": [
00:12:14.068 {
00:12:14.068 "name": "pt1",
00:12:14.068 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:14.068 "is_configured": true,
00:12:14.068 "data_offset": 2048,
00:12:14.068 "data_size": 63488
00:12:14.068 },
00:12:14.068 {
00:12:14.068 "name": "pt2",
00:12:14.068 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:14.068 "is_configured": true,
00:12:14.068 "data_offset": 2048,
00:12:14.068 "data_size": 63488
00:12:14.068 },
00:12:14.068 {
00:12:14.068 "name": "pt3",
00:12:14.068 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:14.068 "is_configured": true,
00:12:14.068 "data_offset": 2048,
00:12:14.068 "data_size": 63488
00:12:14.068 },
00:12:14.068 {
00:12:14.068 "name": "pt4",
00:12:14.069 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:14.069 "is_configured": true,
00:12:14.069 "data_offset": 2048,
00:12:14.069 "data_size": 63488
00:12:14.069 }
00:12:14.069 ]
00:12:14.069 }'
00:12:14.069 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:14.069 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.327 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:14.327 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:14.327 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:14.327 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:14.327 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:14.327 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:14.586 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:14.586 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.586 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.586 19:01:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:14.586 [2024-11-26 19:01:40.956314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:14.586 19:01:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.586 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:14.586 "name": "raid_bdev1",
00:12:14.586 "aliases": [
00:12:14.586 "88818123-bb75-4340-a8be-60b4ff517fe8"
00:12:14.586 ],
00:12:14.586 "product_name": "Raid Volume",
00:12:14.586 "block_size": 512,
00:12:14.586 "num_blocks": 253952,
00:12:14.586 "uuid": "88818123-bb75-4340-a8be-60b4ff517fe8",
00:12:14.586 "assigned_rate_limits": {
00:12:14.586 "rw_ios_per_sec": 0,
00:12:14.586 "rw_mbytes_per_sec": 0,
00:12:14.586 "r_mbytes_per_sec": 0,
00:12:14.586 "w_mbytes_per_sec": 0
00:12:14.586 },
00:12:14.586 "claimed": false,
00:12:14.586 "zoned": false,
00:12:14.586 "supported_io_types": {
00:12:14.586 "read": true,
00:12:14.586 "write": true,
00:12:14.586 "unmap": true,
00:12:14.586 "flush": true,
00:12:14.586 "reset": true,
00:12:14.586 "nvme_admin": false,
00:12:14.586 "nvme_io": false,
00:12:14.586 "nvme_io_md": false,
00:12:14.586 "write_zeroes": true,
00:12:14.586 "zcopy": false,
00:12:14.586 "get_zone_info": false,
00:12:14.586 "zone_management": false,
00:12:14.586 "zone_append": false,
00:12:14.586 "compare": false,
00:12:14.586 "compare_and_write": false,
00:12:14.586 "abort": false,
00:12:14.586 "seek_hole": false,
00:12:14.586 "seek_data": false,
00:12:14.586 "copy": false,
00:12:14.586 "nvme_iov_md": false
00:12:14.586 },
00:12:14.586 "memory_domains": [
00:12:14.586 {
00:12:14.586 "dma_device_id": "system",
00:12:14.586 "dma_device_type": 1
00:12:14.586 },
00:12:14.586 {
00:12:14.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:14.586 "dma_device_type": 2
00:12:14.586 },
00:12:14.587 {
00:12:14.587 "dma_device_id": "system",
00:12:14.587 "dma_device_type": 1
00:12:14.587 },
00:12:14.587 {
00:12:14.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:14.587 "dma_device_type": 2
00:12:14.587 },
00:12:14.587 {
00:12:14.587 "dma_device_id": "system",
00:12:14.587 "dma_device_type": 1
00:12:14.587 },
00:12:14.587 {
00:12:14.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:14.587 "dma_device_type": 2
00:12:14.587 },
00:12:14.587 {
00:12:14.587 "dma_device_id": "system",
00:12:14.587 "dma_device_type": 1
00:12:14.587 },
00:12:14.587 {
00:12:14.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:14.587 "dma_device_type": 2
00:12:14.587 }
00:12:14.587 ],
00:12:14.587 "driver_specific": {
00:12:14.587 "raid": {
00:12:14.587 "uuid": "88818123-bb75-4340-a8be-60b4ff517fe8",
00:12:14.587 "strip_size_kb": 64,
00:12:14.587 "state": "online",
00:12:14.587 "raid_level": "concat",
00:12:14.587 "superblock": true,
00:12:14.587 "num_base_bdevs": 4,
00:12:14.587 "num_base_bdevs_discovered": 4,
00:12:14.587 "num_base_bdevs_operational": 4,
00:12:14.587 "base_bdevs_list": [
00:12:14.587 {
00:12:14.587 "name": "pt1",
00:12:14.587 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:14.587 "is_configured": true,
00:12:14.587 "data_offset": 2048,
00:12:14.587 "data_size": 63488
00:12:14.587 },
00:12:14.587 {
00:12:14.587 "name": "pt2",
00:12:14.587 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:14.587 "is_configured": true,
00:12:14.587 "data_offset": 2048,
00:12:14.587 "data_size": 63488
00:12:14.587 },
00:12:14.587 {
00:12:14.587 "name": "pt3",
00:12:14.587 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:14.587 "is_configured": true,
00:12:14.587 "data_offset": 2048,
00:12:14.587 "data_size": 63488
00:12:14.587 },
00:12:14.587 {
00:12:14.587 "name": "pt4",
00:12:14.587 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:14.587 "is_configured": true,
00:12:14.587 "data_offset": 2048,
00:12:14.587 "data_size": 63488
00:12:14.587 }
00:12:14.587 ]
00:12:14.587 }
00:12:14.587 }
00:12:14.587 }'
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:14.587 pt2
00:12:14.587 pt3
00:12:14.587 pt4'
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.587 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.845 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.846 [2024-11-26 19:01:41.336365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 88818123-bb75-4340-a8be-60b4ff517fe8 '!=' 88818123-bb75-4340-a8be-60b4ff517fe8 ']'
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73145
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73145 ']'
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73145
00:12:14.846 19:01:41 bdev_raid.raid_superblock_test
-- common/autotest_common.sh@959 -- # uname 00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73145 00:12:14.846 killing process with pid 73145 00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73145' 00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73145 00:12:14.846 [2024-11-26 19:01:41.410566] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.846 19:01:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73145 00:12:14.846 [2024-11-26 19:01:41.410688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.846 [2024-11-26 19:01:41.410830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.846 [2024-11-26 19:01:41.410844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:15.413 [2024-11-26 19:01:41.752659] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.349 19:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:16.349 00:12:16.349 real 0m6.124s 00:12:16.349 user 0m9.035s 00:12:16.349 sys 0m0.967s 00:12:16.349 ************************************ 00:12:16.349 END TEST raid_superblock_test 00:12:16.349 ************************************ 00:12:16.349 19:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.349 19:01:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.607 19:01:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:16.607 19:01:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:16.607 19:01:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.607 19:01:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.607 ************************************ 00:12:16.607 START TEST raid_read_error_test 00:12:16.607 ************************************ 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.evpmAr1HpL 00:12:16.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73410 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73410 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73410 ']' 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.607 19:01:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.607 [2024-11-26 19:01:43.139514] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:12:16.607 [2024-11-26 19:01:43.139709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73410 ] 00:12:16.865 [2024-11-26 19:01:43.328698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.123 [2024-11-26 19:01:43.489141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.380 [2024-11-26 19:01:43.744114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.380 [2024-11-26 19:01:43.744189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.638 BaseBdev1_malloc 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.638 true 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.638 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.638 [2024-11-26 19:01:44.258675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:17.638 [2024-11-26 19:01:44.258769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.638 [2024-11-26 19:01:44.258817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:17.638 [2024-11-26 19:01:44.258849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.898 [2024-11-26 19:01:44.262435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.898 [2024-11-26 19:01:44.262509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:17.898 BaseBdev1 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.898 BaseBdev2_malloc 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.898 true 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.898 [2024-11-26 19:01:44.331785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:17.898 [2024-11-26 19:01:44.331857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.898 [2024-11-26 19:01:44.331898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:17.898 [2024-11-26 19:01:44.331939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.898 [2024-11-26 19:01:44.335093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.898 [2024-11-26 19:01:44.335150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:17.898 BaseBdev2 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.898 BaseBdev3_malloc 00:12:17.898 19:01:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.898 true 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.898 [2024-11-26 19:01:44.417209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:17.898 [2024-11-26 19:01:44.417280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.898 [2024-11-26 19:01:44.417330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:17.898 [2024-11-26 19:01:44.417350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.898 [2024-11-26 19:01:44.420437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.898 [2024-11-26 19:01:44.420611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:17.898 BaseBdev3 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.898 BaseBdev4_malloc 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.898 true 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.898 [2024-11-26 19:01:44.484879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:17.898 [2024-11-26 19:01:44.484951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.898 [2024-11-26 19:01:44.484980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:17.898 [2024-11-26 19:01:44.485013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.898 [2024-11-26 19:01:44.488642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.898 [2024-11-26 19:01:44.488714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:17.898 BaseBdev4 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.898 [2024-11-26 19:01:44.497038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.898 [2024-11-26 19:01:44.500201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.898 [2024-11-26 19:01:44.500371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.898 [2024-11-26 19:01:44.500515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:17.898 [2024-11-26 19:01:44.500874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:17.898 [2024-11-26 19:01:44.500900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:17.898 [2024-11-26 19:01:44.501254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:17.898 [2024-11-26 19:01:44.501526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:17.898 [2024-11-26 19:01:44.501555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:17.898 [2024-11-26 19:01:44.501845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:17.898 19:01:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.898 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.156 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.156 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.156 "name": "raid_bdev1", 00:12:18.156 "uuid": "90d4489e-9d12-4954-9bef-1e72052696c2", 00:12:18.156 "strip_size_kb": 64, 00:12:18.156 "state": "online", 00:12:18.156 "raid_level": "concat", 00:12:18.156 "superblock": true, 00:12:18.156 "num_base_bdevs": 4, 00:12:18.156 "num_base_bdevs_discovered": 4, 00:12:18.156 "num_base_bdevs_operational": 4, 00:12:18.156 "base_bdevs_list": [ 
00:12:18.156 { 00:12:18.156 "name": "BaseBdev1", 00:12:18.156 "uuid": "74ee0613-05af-50f8-bf15-9e2cf9e84ff4", 00:12:18.156 "is_configured": true, 00:12:18.156 "data_offset": 2048, 00:12:18.156 "data_size": 63488 00:12:18.156 }, 00:12:18.156 { 00:12:18.156 "name": "BaseBdev2", 00:12:18.156 "uuid": "e9926bca-698b-547f-b343-1e359aa853e2", 00:12:18.156 "is_configured": true, 00:12:18.156 "data_offset": 2048, 00:12:18.156 "data_size": 63488 00:12:18.156 }, 00:12:18.156 { 00:12:18.156 "name": "BaseBdev3", 00:12:18.156 "uuid": "fdeb3d09-81a7-5982-8b78-9fd9ca872bfb", 00:12:18.156 "is_configured": true, 00:12:18.156 "data_offset": 2048, 00:12:18.156 "data_size": 63488 00:12:18.156 }, 00:12:18.156 { 00:12:18.156 "name": "BaseBdev4", 00:12:18.156 "uuid": "5b651f80-54ee-59a2-8107-8a6ca3d47042", 00:12:18.156 "is_configured": true, 00:12:18.156 "data_offset": 2048, 00:12:18.156 "data_size": 63488 00:12:18.156 } 00:12:18.156 ] 00:12:18.156 }' 00:12:18.156 19:01:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.156 19:01:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.414 19:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:18.414 19:01:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:18.672 [2024-11-26 19:01:45.142795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.608 19:01:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.608 19:01:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.608 "name": "raid_bdev1", 00:12:19.608 "uuid": "90d4489e-9d12-4954-9bef-1e72052696c2", 00:12:19.608 "strip_size_kb": 64, 00:12:19.608 "state": "online", 00:12:19.608 "raid_level": "concat", 00:12:19.608 "superblock": true, 00:12:19.608 "num_base_bdevs": 4, 00:12:19.608 "num_base_bdevs_discovered": 4, 00:12:19.608 "num_base_bdevs_operational": 4, 00:12:19.608 "base_bdevs_list": [ 00:12:19.608 { 00:12:19.608 "name": "BaseBdev1", 00:12:19.608 "uuid": "74ee0613-05af-50f8-bf15-9e2cf9e84ff4", 00:12:19.608 "is_configured": true, 00:12:19.608 "data_offset": 2048, 00:12:19.608 "data_size": 63488 00:12:19.608 }, 00:12:19.608 { 00:12:19.608 "name": "BaseBdev2", 00:12:19.608 "uuid": "e9926bca-698b-547f-b343-1e359aa853e2", 00:12:19.608 "is_configured": true, 00:12:19.608 "data_offset": 2048, 00:12:19.608 "data_size": 63488 00:12:19.608 }, 00:12:19.608 { 00:12:19.608 "name": "BaseBdev3", 00:12:19.608 "uuid": "fdeb3d09-81a7-5982-8b78-9fd9ca872bfb", 00:12:19.608 "is_configured": true, 00:12:19.608 "data_offset": 2048, 00:12:19.608 "data_size": 63488 00:12:19.608 }, 00:12:19.608 { 00:12:19.608 "name": "BaseBdev4", 00:12:19.608 "uuid": "5b651f80-54ee-59a2-8107-8a6ca3d47042", 00:12:19.608 "is_configured": true, 00:12:19.608 "data_offset": 2048, 00:12:19.608 "data_size": 63488 00:12:19.608 } 00:12:19.608 ] 00:12:19.608 }' 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.608 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.175 [2024-11-26 19:01:46.557298] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:20.175 [2024-11-26 19:01:46.557492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.175 [2024-11-26 19:01:46.561152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.175 [2024-11-26 19:01:46.561441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.175 [2024-11-26 19:01:46.561669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.175 { 00:12:20.175 "results": [ 00:12:20.175 { 00:12:20.175 "job": "raid_bdev1", 00:12:20.175 "core_mask": "0x1", 00:12:20.175 "workload": "randrw", 00:12:20.175 "percentage": 50, 00:12:20.175 "status": "finished", 00:12:20.175 "queue_depth": 1, 00:12:20.175 "io_size": 131072, 00:12:20.175 "runtime": 1.41221, 00:12:20.175 "iops": 9213.218997174641, 00:12:20.175 "mibps": 1151.6523746468301, 00:12:20.175 "io_failed": 1, 00:12:20.175 "io_timeout": 0, 00:12:20.175 "avg_latency_us": 152.28461336388787, 00:12:20.175 "min_latency_us": 41.42545454545454, 00:12:20.175 "max_latency_us": 1839.4763636363637 00:12:20.175 } 00:12:20.175 ], 00:12:20.175 "core_count": 1 00:12:20.175 } 00:12:20.175 [2024-11-26 19:01:46.561844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73410 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73410 ']' 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73410 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73410 00:12:20.175 killing process with pid 73410 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73410' 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73410 00:12:20.175 [2024-11-26 19:01:46.600089] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.175 19:01:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73410 00:12:20.433 [2024-11-26 19:01:46.935843] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.808 19:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.evpmAr1HpL 00:12:21.808 19:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:21.808 19:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:21.808 19:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:21.808 19:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:21.808 19:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:21.808 19:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:21.808 19:01:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:21.808 00:12:21.808 real 0m5.225s 00:12:21.808 user 0m6.298s 00:12:21.808 sys 0m0.733s 00:12:21.808 ************************************ 00:12:21.808 END TEST raid_read_error_test 
00:12:21.808 ************************************ 00:12:21.808 19:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.808 19:01:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.808 19:01:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:21.808 19:01:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:21.808 19:01:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.808 19:01:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.808 ************************************ 00:12:21.808 START TEST raid_write_error_test 00:12:21.808 ************************************ 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:21.808 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:21.809 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pGoys4srwz 00:12:21.809 19:01:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73561 00:12:21.809 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:21.809 19:01:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73561 00:12:21.809 19:01:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73561 ']' 00:12:21.809 19:01:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.809 19:01:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.809 19:01:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.809 19:01:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.809 19:01:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.809 [2024-11-26 19:01:48.414626] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:12:21.809 [2024-11-26 19:01:48.415070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73561 ] 00:12:22.067 [2024-11-26 19:01:48.607733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.326 [2024-11-26 19:01:48.788475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.584 [2024-11-26 19:01:49.033541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.585 [2024-11-26 19:01:49.033639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.842 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.842 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:22.842 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:22.842 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:22.842 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.842 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.100 BaseBdev1_malloc 00:12:23.100 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.100 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:23.100 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.101 true 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.101 [2024-11-26 19:01:49.514332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:23.101 [2024-11-26 19:01:49.514553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.101 [2024-11-26 19:01:49.514595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:23.101 [2024-11-26 19:01:49.514616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.101 [2024-11-26 19:01:49.517787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.101 [2024-11-26 19:01:49.517960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:23.101 BaseBdev1 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.101 BaseBdev2_malloc 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:23.101 19:01:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.101 true 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.101 [2024-11-26 19:01:49.583427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:23.101 [2024-11-26 19:01:49.583502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.101 [2024-11-26 19:01:49.583528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:23.101 [2024-11-26 19:01:49.583546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.101 [2024-11-26 19:01:49.586841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.101 [2024-11-26 19:01:49.586906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:23.101 BaseBdev2 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:23.101 BaseBdev3_malloc 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.101 true 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.101 [2024-11-26 19:01:49.656176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:23.101 [2024-11-26 19:01:49.656244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.101 [2024-11-26 19:01:49.656272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:23.101 [2024-11-26 19:01:49.656315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.101 [2024-11-26 19:01:49.659461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.101 [2024-11-26 19:01:49.659525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:23.101 BaseBdev3 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.101 BaseBdev4_malloc 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.101 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.360 true 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.360 [2024-11-26 19:01:49.724921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:23.360 [2024-11-26 19:01:49.725032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.360 [2024-11-26 19:01:49.725061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:23.360 [2024-11-26 19:01:49.725081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.360 [2024-11-26 19:01:49.728171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.360 [2024-11-26 19:01:49.728253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:23.360 BaseBdev4 
00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.360 [2024-11-26 19:01:49.733129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.360 [2024-11-26 19:01:49.735937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.360 [2024-11-26 19:01:49.736048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.360 [2024-11-26 19:01:49.736160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:23.360 [2024-11-26 19:01:49.736506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:23.360 [2024-11-26 19:01:49.736528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:23.360 [2024-11-26 19:01:49.736855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:23.360 [2024-11-26 19:01:49.737108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:23.360 [2024-11-26 19:01:49.737127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:23.360 [2024-11-26 19:01:49.737399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.360 "name": "raid_bdev1", 00:12:23.360 "uuid": "b04e2839-2588-46d3-8d1f-6ba5e04cec47", 00:12:23.360 "strip_size_kb": 64, 00:12:23.360 "state": "online", 00:12:23.360 "raid_level": "concat", 00:12:23.360 "superblock": true, 00:12:23.360 "num_base_bdevs": 4, 00:12:23.360 "num_base_bdevs_discovered": 4, 00:12:23.360 
"num_base_bdevs_operational": 4, 00:12:23.360 "base_bdevs_list": [ 00:12:23.360 { 00:12:23.360 "name": "BaseBdev1", 00:12:23.360 "uuid": "6f9146c6-ec3c-5c34-ac32-86f5618d1200", 00:12:23.360 "is_configured": true, 00:12:23.360 "data_offset": 2048, 00:12:23.360 "data_size": 63488 00:12:23.360 }, 00:12:23.360 { 00:12:23.360 "name": "BaseBdev2", 00:12:23.360 "uuid": "39902878-d5a9-5131-8a27-88d7b875087b", 00:12:23.360 "is_configured": true, 00:12:23.360 "data_offset": 2048, 00:12:23.360 "data_size": 63488 00:12:23.360 }, 00:12:23.360 { 00:12:23.360 "name": "BaseBdev3", 00:12:23.360 "uuid": "c748b4b8-e6f5-56c1-aa33-a74845af94c4", 00:12:23.360 "is_configured": true, 00:12:23.360 "data_offset": 2048, 00:12:23.360 "data_size": 63488 00:12:23.360 }, 00:12:23.360 { 00:12:23.360 "name": "BaseBdev4", 00:12:23.360 "uuid": "b454e7fa-3876-50b4-99aa-c1e7127fe8cb", 00:12:23.360 "is_configured": true, 00:12:23.360 "data_offset": 2048, 00:12:23.360 "data_size": 63488 00:12:23.360 } 00:12:23.360 ] 00:12:23.360 }' 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.360 19:01:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.944 19:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:23.944 19:01:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:23.944 [2024-11-26 19:01:50.415304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:24.879 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.880 19:01:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.880 "name": "raid_bdev1", 00:12:24.880 "uuid": "b04e2839-2588-46d3-8d1f-6ba5e04cec47", 00:12:24.880 "strip_size_kb": 64, 00:12:24.880 "state": "online", 00:12:24.880 "raid_level": "concat", 00:12:24.880 "superblock": true, 00:12:24.880 "num_base_bdevs": 4, 00:12:24.880 "num_base_bdevs_discovered": 4, 00:12:24.880 "num_base_bdevs_operational": 4, 00:12:24.880 "base_bdevs_list": [ 00:12:24.880 { 00:12:24.880 "name": "BaseBdev1", 00:12:24.880 "uuid": "6f9146c6-ec3c-5c34-ac32-86f5618d1200", 00:12:24.880 "is_configured": true, 00:12:24.880 "data_offset": 2048, 00:12:24.880 "data_size": 63488 00:12:24.880 }, 00:12:24.880 { 00:12:24.880 "name": "BaseBdev2", 00:12:24.880 "uuid": "39902878-d5a9-5131-8a27-88d7b875087b", 00:12:24.880 "is_configured": true, 00:12:24.880 "data_offset": 2048, 00:12:24.880 "data_size": 63488 00:12:24.880 }, 00:12:24.880 { 00:12:24.880 "name": "BaseBdev3", 00:12:24.880 "uuid": "c748b4b8-e6f5-56c1-aa33-a74845af94c4", 00:12:24.880 "is_configured": true, 00:12:24.880 "data_offset": 2048, 00:12:24.880 "data_size": 63488 00:12:24.880 }, 00:12:24.880 { 00:12:24.880 "name": "BaseBdev4", 00:12:24.880 "uuid": "b454e7fa-3876-50b4-99aa-c1e7127fe8cb", 00:12:24.880 "is_configured": true, 00:12:24.880 "data_offset": 2048, 00:12:24.880 "data_size": 63488 00:12:24.880 } 00:12:24.880 ] 00:12:24.880 }' 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.880 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.446 [2024-11-26 19:01:51.838553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.446 [2024-11-26 19:01:51.838595] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.446 [2024-11-26 19:01:51.842309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.446 [2024-11-26 19:01:51.842390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.446 [2024-11-26 19:01:51.842461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.446 [2024-11-26 19:01:51.842481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:25.446 { 00:12:25.446 "results": [ 00:12:25.446 { 00:12:25.446 "job": "raid_bdev1", 00:12:25.446 "core_mask": "0x1", 00:12:25.446 "workload": "randrw", 00:12:25.446 "percentage": 50, 00:12:25.446 "status": "finished", 00:12:25.446 "queue_depth": 1, 00:12:25.446 "io_size": 131072, 00:12:25.446 "runtime": 1.420398, 00:12:25.446 "iops": 9104.4904315551, 00:12:25.446 "mibps": 1138.0613039443874, 00:12:25.446 "io_failed": 1, 00:12:25.446 "io_timeout": 0, 00:12:25.446 "avg_latency_us": 154.1660996886049, 00:12:25.446 "min_latency_us": 39.33090909090909, 00:12:25.446 "max_latency_us": 1951.1854545454546 00:12:25.446 } 00:12:25.446 ], 00:12:25.446 "core_count": 1 00:12:25.446 } 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73561 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73561 ']' 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73561 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73561 00:12:25.446 killing process with pid 73561 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73561' 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73561 00:12:25.446 [2024-11-26 19:01:51.879010] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.446 19:01:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73561 00:12:25.704 [2024-11-26 19:01:52.226682] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.078 19:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:27.078 19:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pGoys4srwz 00:12:27.078 19:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:27.078 19:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:27.078 19:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:27.078 19:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:27.078 19:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:27.078 19:01:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:27.078 00:12:27.078 real 0m5.213s 00:12:27.078 user 0m6.303s 
00:12:27.078 sys 0m0.741s 00:12:27.078 19:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.078 19:01:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.078 ************************************ 00:12:27.078 END TEST raid_write_error_test 00:12:27.078 ************************************ 00:12:27.078 19:01:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:27.078 19:01:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:27.078 19:01:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:27.078 19:01:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.078 19:01:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.078 ************************************ 00:12:27.078 START TEST raid_state_function_test 00:12:27.079 ************************************ 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.079 
19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:27.079 19:01:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73710 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73710' 00:12:27.079 Process raid pid: 73710 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73710 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73710 ']' 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.079 19:01:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.079 [2024-11-26 19:01:53.672824] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:12:27.079 [2024-11-26 19:01:53.673020] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.337 [2024-11-26 19:01:53.859656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.595 [2024-11-26 19:01:54.020376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.854 [2024-11-26 19:01:54.266283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.854 [2024-11-26 19:01:54.266398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.112 [2024-11-26 19:01:54.695965] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.112 [2024-11-26 19:01:54.696060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.112 [2024-11-26 19:01:54.696077] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.112 [2024-11-26 19:01:54.696094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.112 [2024-11-26 19:01:54.696104] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:28.112 [2024-11-26 19:01:54.696117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.112 [2024-11-26 19:01:54.696126] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:28.112 [2024-11-26 19:01:54.696155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.112 19:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.370 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.370 "name": "Existed_Raid", 00:12:28.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.370 "strip_size_kb": 0, 00:12:28.370 "state": "configuring", 00:12:28.370 "raid_level": "raid1", 00:12:28.370 "superblock": false, 00:12:28.370 "num_base_bdevs": 4, 00:12:28.370 "num_base_bdevs_discovered": 0, 00:12:28.370 "num_base_bdevs_operational": 4, 00:12:28.370 "base_bdevs_list": [ 00:12:28.370 { 00:12:28.370 "name": "BaseBdev1", 00:12:28.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.370 "is_configured": false, 00:12:28.370 "data_offset": 0, 00:12:28.370 "data_size": 0 00:12:28.370 }, 00:12:28.370 { 00:12:28.370 "name": "BaseBdev2", 00:12:28.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.370 "is_configured": false, 00:12:28.370 "data_offset": 0, 00:12:28.370 "data_size": 0 00:12:28.370 }, 00:12:28.370 { 00:12:28.370 "name": "BaseBdev3", 00:12:28.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.370 "is_configured": false, 00:12:28.370 "data_offset": 0, 00:12:28.370 "data_size": 0 00:12:28.370 }, 00:12:28.370 { 00:12:28.370 "name": "BaseBdev4", 00:12:28.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.370 "is_configured": false, 00:12:28.370 "data_offset": 0, 00:12:28.370 "data_size": 0 00:12:28.370 } 00:12:28.370 ] 00:12:28.370 }' 00:12:28.370 19:01:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.370 19:01:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.936 [2024-11-26 19:01:55.268290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.936 [2024-11-26 19:01:55.268386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.936 [2024-11-26 19:01:55.280291] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.936 [2024-11-26 19:01:55.280360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.936 [2024-11-26 19:01:55.280378] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.936 [2024-11-26 19:01:55.280394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.936 [2024-11-26 19:01:55.280403] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:28.936 [2024-11-26 19:01:55.280418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.936 [2024-11-26 19:01:55.280427] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:28.936 [2024-11-26 19:01:55.280442] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.936 [2024-11-26 19:01:55.329998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.936 BaseBdev1 00:12:28.936 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.937 [ 00:12:28.937 { 00:12:28.937 "name": "BaseBdev1", 00:12:28.937 "aliases": [ 00:12:28.937 "5bb65785-5939-4ae9-a8aa-604198931646" 00:12:28.937 ], 00:12:28.937 "product_name": "Malloc disk", 00:12:28.937 "block_size": 512, 00:12:28.937 "num_blocks": 65536, 00:12:28.937 "uuid": "5bb65785-5939-4ae9-a8aa-604198931646", 00:12:28.937 "assigned_rate_limits": { 00:12:28.937 "rw_ios_per_sec": 0, 00:12:28.937 "rw_mbytes_per_sec": 0, 00:12:28.937 "r_mbytes_per_sec": 0, 00:12:28.937 "w_mbytes_per_sec": 0 00:12:28.937 }, 00:12:28.937 "claimed": true, 00:12:28.937 "claim_type": "exclusive_write", 00:12:28.937 "zoned": false, 00:12:28.937 "supported_io_types": { 00:12:28.937 "read": true, 00:12:28.937 "write": true, 00:12:28.937 "unmap": true, 00:12:28.937 "flush": true, 00:12:28.937 "reset": true, 00:12:28.937 "nvme_admin": false, 00:12:28.937 "nvme_io": false, 00:12:28.937 "nvme_io_md": false, 00:12:28.937 "write_zeroes": true, 00:12:28.937 "zcopy": true, 00:12:28.937 "get_zone_info": false, 00:12:28.937 "zone_management": false, 00:12:28.937 "zone_append": false, 00:12:28.937 "compare": false, 00:12:28.937 "compare_and_write": false, 00:12:28.937 "abort": true, 00:12:28.937 "seek_hole": false, 00:12:28.937 "seek_data": false, 00:12:28.937 "copy": true, 00:12:28.937 "nvme_iov_md": false 00:12:28.937 }, 00:12:28.937 "memory_domains": [ 00:12:28.937 { 00:12:28.937 "dma_device_id": "system", 00:12:28.937 "dma_device_type": 1 00:12:28.937 }, 00:12:28.937 { 00:12:28.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.937 "dma_device_type": 2 00:12:28.937 } 00:12:28.937 ], 00:12:28.937 "driver_specific": {} 00:12:28.937 } 00:12:28.937 ] 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.937 "name": "Existed_Raid", 
00:12:28.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.937 "strip_size_kb": 0, 00:12:28.937 "state": "configuring", 00:12:28.937 "raid_level": "raid1", 00:12:28.937 "superblock": false, 00:12:28.937 "num_base_bdevs": 4, 00:12:28.937 "num_base_bdevs_discovered": 1, 00:12:28.937 "num_base_bdevs_operational": 4, 00:12:28.937 "base_bdevs_list": [ 00:12:28.937 { 00:12:28.937 "name": "BaseBdev1", 00:12:28.937 "uuid": "5bb65785-5939-4ae9-a8aa-604198931646", 00:12:28.937 "is_configured": true, 00:12:28.937 "data_offset": 0, 00:12:28.937 "data_size": 65536 00:12:28.937 }, 00:12:28.937 { 00:12:28.937 "name": "BaseBdev2", 00:12:28.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.937 "is_configured": false, 00:12:28.937 "data_offset": 0, 00:12:28.937 "data_size": 0 00:12:28.937 }, 00:12:28.937 { 00:12:28.937 "name": "BaseBdev3", 00:12:28.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.937 "is_configured": false, 00:12:28.937 "data_offset": 0, 00:12:28.937 "data_size": 0 00:12:28.937 }, 00:12:28.937 { 00:12:28.937 "name": "BaseBdev4", 00:12:28.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.937 "is_configured": false, 00:12:28.937 "data_offset": 0, 00:12:28.937 "data_size": 0 00:12:28.937 } 00:12:28.937 ] 00:12:28.937 }' 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.937 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.504 [2024-11-26 19:01:55.874276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.504 [2024-11-26 19:01:55.874392] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.504 [2024-11-26 19:01:55.882332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.504 [2024-11-26 19:01:55.885220] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:29.504 [2024-11-26 19:01:55.885279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:29.504 [2024-11-26 19:01:55.885310] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:29.504 [2024-11-26 19:01:55.885329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:29.504 [2024-11-26 19:01:55.885339] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:29.504 [2024-11-26 19:01:55.885352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.504 
19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.504 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.504 "name": "Existed_Raid", 00:12:29.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.504 "strip_size_kb": 0, 00:12:29.504 "state": "configuring", 00:12:29.505 "raid_level": "raid1", 00:12:29.505 "superblock": false, 00:12:29.505 "num_base_bdevs": 4, 00:12:29.505 "num_base_bdevs_discovered": 1, 
00:12:29.505 "num_base_bdevs_operational": 4, 00:12:29.505 "base_bdevs_list": [ 00:12:29.505 { 00:12:29.505 "name": "BaseBdev1", 00:12:29.505 "uuid": "5bb65785-5939-4ae9-a8aa-604198931646", 00:12:29.505 "is_configured": true, 00:12:29.505 "data_offset": 0, 00:12:29.505 "data_size": 65536 00:12:29.505 }, 00:12:29.505 { 00:12:29.505 "name": "BaseBdev2", 00:12:29.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.505 "is_configured": false, 00:12:29.505 "data_offset": 0, 00:12:29.505 "data_size": 0 00:12:29.505 }, 00:12:29.505 { 00:12:29.505 "name": "BaseBdev3", 00:12:29.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.505 "is_configured": false, 00:12:29.505 "data_offset": 0, 00:12:29.505 "data_size": 0 00:12:29.505 }, 00:12:29.505 { 00:12:29.505 "name": "BaseBdev4", 00:12:29.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.505 "is_configured": false, 00:12:29.505 "data_offset": 0, 00:12:29.505 "data_size": 0 00:12:29.505 } 00:12:29.505 ] 00:12:29.505 }' 00:12:29.505 19:01:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.505 19:01:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.071 [2024-11-26 19:01:56.467442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.071 BaseBdev2 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.071 [ 00:12:30.071 { 00:12:30.071 "name": "BaseBdev2", 00:12:30.071 "aliases": [ 00:12:30.071 "ad8c4f42-102a-4198-918f-8c924e7fece3" 00:12:30.071 ], 00:12:30.071 "product_name": "Malloc disk", 00:12:30.071 "block_size": 512, 00:12:30.071 "num_blocks": 65536, 00:12:30.071 "uuid": "ad8c4f42-102a-4198-918f-8c924e7fece3", 00:12:30.071 "assigned_rate_limits": { 00:12:30.071 "rw_ios_per_sec": 0, 00:12:30.071 "rw_mbytes_per_sec": 0, 00:12:30.071 "r_mbytes_per_sec": 0, 00:12:30.071 "w_mbytes_per_sec": 0 00:12:30.071 }, 00:12:30.071 "claimed": true, 00:12:30.071 "claim_type": "exclusive_write", 00:12:30.071 "zoned": false, 00:12:30.071 "supported_io_types": { 00:12:30.071 "read": true, 
00:12:30.071 "write": true, 00:12:30.071 "unmap": true, 00:12:30.071 "flush": true, 00:12:30.071 "reset": true, 00:12:30.071 "nvme_admin": false, 00:12:30.071 "nvme_io": false, 00:12:30.071 "nvme_io_md": false, 00:12:30.071 "write_zeroes": true, 00:12:30.071 "zcopy": true, 00:12:30.071 "get_zone_info": false, 00:12:30.071 "zone_management": false, 00:12:30.071 "zone_append": false, 00:12:30.071 "compare": false, 00:12:30.071 "compare_and_write": false, 00:12:30.071 "abort": true, 00:12:30.071 "seek_hole": false, 00:12:30.071 "seek_data": false, 00:12:30.071 "copy": true, 00:12:30.071 "nvme_iov_md": false 00:12:30.071 }, 00:12:30.071 "memory_domains": [ 00:12:30.071 { 00:12:30.071 "dma_device_id": "system", 00:12:30.071 "dma_device_type": 1 00:12:30.071 }, 00:12:30.071 { 00:12:30.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.071 "dma_device_type": 2 00:12:30.071 } 00:12:30.071 ], 00:12:30.071 "driver_specific": {} 00:12:30.071 } 00:12:30.071 ] 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.071 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.071 "name": "Existed_Raid", 00:12:30.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.071 "strip_size_kb": 0, 00:12:30.071 "state": "configuring", 00:12:30.071 "raid_level": "raid1", 00:12:30.071 "superblock": false, 00:12:30.071 "num_base_bdevs": 4, 00:12:30.071 "num_base_bdevs_discovered": 2, 00:12:30.071 "num_base_bdevs_operational": 4, 00:12:30.071 "base_bdevs_list": [ 00:12:30.071 { 00:12:30.071 "name": "BaseBdev1", 00:12:30.071 "uuid": "5bb65785-5939-4ae9-a8aa-604198931646", 00:12:30.071 "is_configured": true, 00:12:30.071 "data_offset": 0, 00:12:30.071 "data_size": 65536 00:12:30.071 }, 00:12:30.071 { 00:12:30.071 "name": "BaseBdev2", 00:12:30.071 "uuid": "ad8c4f42-102a-4198-918f-8c924e7fece3", 00:12:30.071 "is_configured": true, 
00:12:30.071 "data_offset": 0, 00:12:30.071 "data_size": 65536 00:12:30.071 }, 00:12:30.071 { 00:12:30.071 "name": "BaseBdev3", 00:12:30.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.072 "is_configured": false, 00:12:30.072 "data_offset": 0, 00:12:30.072 "data_size": 0 00:12:30.072 }, 00:12:30.072 { 00:12:30.072 "name": "BaseBdev4", 00:12:30.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.072 "is_configured": false, 00:12:30.072 "data_offset": 0, 00:12:30.072 "data_size": 0 00:12:30.072 } 00:12:30.072 ] 00:12:30.072 }' 00:12:30.072 19:01:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.072 19:01:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.639 [2024-11-26 19:01:57.077247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.639 BaseBdev3 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.639 [ 00:12:30.639 { 00:12:30.639 "name": "BaseBdev3", 00:12:30.639 "aliases": [ 00:12:30.639 "28bf680f-10e8-4bf7-8f33-da3e7ee7a844" 00:12:30.639 ], 00:12:30.639 "product_name": "Malloc disk", 00:12:30.639 "block_size": 512, 00:12:30.639 "num_blocks": 65536, 00:12:30.639 "uuid": "28bf680f-10e8-4bf7-8f33-da3e7ee7a844", 00:12:30.639 "assigned_rate_limits": { 00:12:30.639 "rw_ios_per_sec": 0, 00:12:30.639 "rw_mbytes_per_sec": 0, 00:12:30.639 "r_mbytes_per_sec": 0, 00:12:30.639 "w_mbytes_per_sec": 0 00:12:30.639 }, 00:12:30.639 "claimed": true, 00:12:30.639 "claim_type": "exclusive_write", 00:12:30.639 "zoned": false, 00:12:30.639 "supported_io_types": { 00:12:30.639 "read": true, 00:12:30.639 "write": true, 00:12:30.639 "unmap": true, 00:12:30.639 "flush": true, 00:12:30.639 "reset": true, 00:12:30.639 "nvme_admin": false, 00:12:30.639 "nvme_io": false, 00:12:30.639 "nvme_io_md": false, 00:12:30.639 "write_zeroes": true, 00:12:30.639 "zcopy": true, 00:12:30.639 "get_zone_info": false, 00:12:30.639 "zone_management": false, 00:12:30.639 "zone_append": false, 00:12:30.639 "compare": false, 00:12:30.639 "compare_and_write": false, 
00:12:30.639 "abort": true, 00:12:30.639 "seek_hole": false, 00:12:30.639 "seek_data": false, 00:12:30.639 "copy": true, 00:12:30.639 "nvme_iov_md": false 00:12:30.639 }, 00:12:30.639 "memory_domains": [ 00:12:30.639 { 00:12:30.639 "dma_device_id": "system", 00:12:30.639 "dma_device_type": 1 00:12:30.639 }, 00:12:30.639 { 00:12:30.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.639 "dma_device_type": 2 00:12:30.639 } 00:12:30.639 ], 00:12:30.639 "driver_specific": {} 00:12:30.639 } 00:12:30.639 ] 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.639 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.639 "name": "Existed_Raid", 00:12:30.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.639 "strip_size_kb": 0, 00:12:30.639 "state": "configuring", 00:12:30.639 "raid_level": "raid1", 00:12:30.639 "superblock": false, 00:12:30.640 "num_base_bdevs": 4, 00:12:30.640 "num_base_bdevs_discovered": 3, 00:12:30.640 "num_base_bdevs_operational": 4, 00:12:30.640 "base_bdevs_list": [ 00:12:30.640 { 00:12:30.640 "name": "BaseBdev1", 00:12:30.640 "uuid": "5bb65785-5939-4ae9-a8aa-604198931646", 00:12:30.640 "is_configured": true, 00:12:30.640 "data_offset": 0, 00:12:30.640 "data_size": 65536 00:12:30.640 }, 00:12:30.640 { 00:12:30.640 "name": "BaseBdev2", 00:12:30.640 "uuid": "ad8c4f42-102a-4198-918f-8c924e7fece3", 00:12:30.640 "is_configured": true, 00:12:30.640 "data_offset": 0, 00:12:30.640 "data_size": 65536 00:12:30.640 }, 00:12:30.640 { 00:12:30.640 "name": "BaseBdev3", 00:12:30.640 "uuid": "28bf680f-10e8-4bf7-8f33-da3e7ee7a844", 00:12:30.640 "is_configured": true, 00:12:30.640 "data_offset": 0, 00:12:30.640 "data_size": 65536 00:12:30.640 }, 00:12:30.640 { 00:12:30.640 "name": "BaseBdev4", 00:12:30.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.640 "is_configured": false, 
00:12:30.640 "data_offset": 0, 00:12:30.640 "data_size": 0 00:12:30.640 } 00:12:30.640 ] 00:12:30.640 }' 00:12:30.640 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.640 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.206 [2024-11-26 19:01:57.663275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:31.206 [2024-11-26 19:01:57.663359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:31.206 [2024-11-26 19:01:57.663373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:31.206 [2024-11-26 19:01:57.663741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:31.206 [2024-11-26 19:01:57.664000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:31.206 [2024-11-26 19:01:57.664032] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:31.206 [2024-11-26 19:01:57.664377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.206 BaseBdev4 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.206 [ 00:12:31.206 { 00:12:31.206 "name": "BaseBdev4", 00:12:31.206 "aliases": [ 00:12:31.206 "ede4b15c-88cc-4eea-bc54-b0775cb78269" 00:12:31.206 ], 00:12:31.206 "product_name": "Malloc disk", 00:12:31.206 "block_size": 512, 00:12:31.206 "num_blocks": 65536, 00:12:31.206 "uuid": "ede4b15c-88cc-4eea-bc54-b0775cb78269", 00:12:31.206 "assigned_rate_limits": { 00:12:31.206 "rw_ios_per_sec": 0, 00:12:31.206 "rw_mbytes_per_sec": 0, 00:12:31.206 "r_mbytes_per_sec": 0, 00:12:31.206 "w_mbytes_per_sec": 0 00:12:31.206 }, 00:12:31.206 "claimed": true, 00:12:31.206 "claim_type": "exclusive_write", 00:12:31.206 "zoned": false, 00:12:31.206 "supported_io_types": { 00:12:31.206 "read": true, 00:12:31.206 "write": true, 00:12:31.206 "unmap": true, 00:12:31.206 "flush": true, 00:12:31.206 "reset": true, 00:12:31.206 
"nvme_admin": false, 00:12:31.206 "nvme_io": false, 00:12:31.206 "nvme_io_md": false, 00:12:31.206 "write_zeroes": true, 00:12:31.206 "zcopy": true, 00:12:31.206 "get_zone_info": false, 00:12:31.206 "zone_management": false, 00:12:31.206 "zone_append": false, 00:12:31.206 "compare": false, 00:12:31.206 "compare_and_write": false, 00:12:31.206 "abort": true, 00:12:31.206 "seek_hole": false, 00:12:31.206 "seek_data": false, 00:12:31.206 "copy": true, 00:12:31.206 "nvme_iov_md": false 00:12:31.206 }, 00:12:31.206 "memory_domains": [ 00:12:31.206 { 00:12:31.206 "dma_device_id": "system", 00:12:31.206 "dma_device_type": 1 00:12:31.206 }, 00:12:31.206 { 00:12:31.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.206 "dma_device_type": 2 00:12:31.206 } 00:12:31.206 ], 00:12:31.206 "driver_specific": {} 00:12:31.206 } 00:12:31.206 ] 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.206 19:01:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.206 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.206 "name": "Existed_Raid", 00:12:31.207 "uuid": "564fc25d-0a5c-4fdb-8113-b701711923b0", 00:12:31.207 "strip_size_kb": 0, 00:12:31.207 "state": "online", 00:12:31.207 "raid_level": "raid1", 00:12:31.207 "superblock": false, 00:12:31.207 "num_base_bdevs": 4, 00:12:31.207 "num_base_bdevs_discovered": 4, 00:12:31.207 "num_base_bdevs_operational": 4, 00:12:31.207 "base_bdevs_list": [ 00:12:31.207 { 00:12:31.207 "name": "BaseBdev1", 00:12:31.207 "uuid": "5bb65785-5939-4ae9-a8aa-604198931646", 00:12:31.207 "is_configured": true, 00:12:31.207 "data_offset": 0, 00:12:31.207 "data_size": 65536 00:12:31.207 }, 00:12:31.207 { 00:12:31.207 "name": "BaseBdev2", 00:12:31.207 "uuid": "ad8c4f42-102a-4198-918f-8c924e7fece3", 00:12:31.207 "is_configured": true, 00:12:31.207 "data_offset": 0, 00:12:31.207 "data_size": 65536 00:12:31.207 }, 00:12:31.207 { 00:12:31.207 "name": "BaseBdev3", 00:12:31.207 "uuid": 
"28bf680f-10e8-4bf7-8f33-da3e7ee7a844", 00:12:31.207 "is_configured": true, 00:12:31.207 "data_offset": 0, 00:12:31.207 "data_size": 65536 00:12:31.207 }, 00:12:31.207 { 00:12:31.207 "name": "BaseBdev4", 00:12:31.207 "uuid": "ede4b15c-88cc-4eea-bc54-b0775cb78269", 00:12:31.207 "is_configured": true, 00:12:31.207 "data_offset": 0, 00:12:31.207 "data_size": 65536 00:12:31.207 } 00:12:31.207 ] 00:12:31.207 }' 00:12:31.207 19:01:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.207 19:01:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.773 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:31.773 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:31.773 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:31.773 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:31.773 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:31.773 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:31.773 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:31.773 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:31.773 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.774 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.774 [2024-11-26 19:01:58.240074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.774 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.774 19:01:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:31.774 "name": "Existed_Raid", 00:12:31.774 "aliases": [ 00:12:31.774 "564fc25d-0a5c-4fdb-8113-b701711923b0" 00:12:31.774 ], 00:12:31.774 "product_name": "Raid Volume", 00:12:31.774 "block_size": 512, 00:12:31.774 "num_blocks": 65536, 00:12:31.774 "uuid": "564fc25d-0a5c-4fdb-8113-b701711923b0", 00:12:31.774 "assigned_rate_limits": { 00:12:31.774 "rw_ios_per_sec": 0, 00:12:31.774 "rw_mbytes_per_sec": 0, 00:12:31.774 "r_mbytes_per_sec": 0, 00:12:31.774 "w_mbytes_per_sec": 0 00:12:31.774 }, 00:12:31.774 "claimed": false, 00:12:31.774 "zoned": false, 00:12:31.774 "supported_io_types": { 00:12:31.774 "read": true, 00:12:31.774 "write": true, 00:12:31.774 "unmap": false, 00:12:31.774 "flush": false, 00:12:31.774 "reset": true, 00:12:31.774 "nvme_admin": false, 00:12:31.774 "nvme_io": false, 00:12:31.774 "nvme_io_md": false, 00:12:31.774 "write_zeroes": true, 00:12:31.774 "zcopy": false, 00:12:31.774 "get_zone_info": false, 00:12:31.774 "zone_management": false, 00:12:31.774 "zone_append": false, 00:12:31.774 "compare": false, 00:12:31.774 "compare_and_write": false, 00:12:31.774 "abort": false, 00:12:31.774 "seek_hole": false, 00:12:31.774 "seek_data": false, 00:12:31.774 "copy": false, 00:12:31.774 "nvme_iov_md": false 00:12:31.774 }, 00:12:31.774 "memory_domains": [ 00:12:31.774 { 00:12:31.774 "dma_device_id": "system", 00:12:31.774 "dma_device_type": 1 00:12:31.774 }, 00:12:31.774 { 00:12:31.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.774 "dma_device_type": 2 00:12:31.774 }, 00:12:31.774 { 00:12:31.774 "dma_device_id": "system", 00:12:31.774 "dma_device_type": 1 00:12:31.774 }, 00:12:31.774 { 00:12:31.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.774 "dma_device_type": 2 00:12:31.774 }, 00:12:31.774 { 00:12:31.774 "dma_device_id": "system", 00:12:31.774 "dma_device_type": 1 00:12:31.774 }, 00:12:31.774 { 00:12:31.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:31.774 "dma_device_type": 2 00:12:31.774 }, 00:12:31.774 { 00:12:31.774 "dma_device_id": "system", 00:12:31.774 "dma_device_type": 1 00:12:31.774 }, 00:12:31.774 { 00:12:31.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.774 "dma_device_type": 2 00:12:31.774 } 00:12:31.774 ], 00:12:31.774 "driver_specific": { 00:12:31.774 "raid": { 00:12:31.774 "uuid": "564fc25d-0a5c-4fdb-8113-b701711923b0", 00:12:31.774 "strip_size_kb": 0, 00:12:31.774 "state": "online", 00:12:31.774 "raid_level": "raid1", 00:12:31.774 "superblock": false, 00:12:31.774 "num_base_bdevs": 4, 00:12:31.774 "num_base_bdevs_discovered": 4, 00:12:31.774 "num_base_bdevs_operational": 4, 00:12:31.774 "base_bdevs_list": [ 00:12:31.774 { 00:12:31.774 "name": "BaseBdev1", 00:12:31.774 "uuid": "5bb65785-5939-4ae9-a8aa-604198931646", 00:12:31.774 "is_configured": true, 00:12:31.774 "data_offset": 0, 00:12:31.774 "data_size": 65536 00:12:31.774 }, 00:12:31.774 { 00:12:31.774 "name": "BaseBdev2", 00:12:31.774 "uuid": "ad8c4f42-102a-4198-918f-8c924e7fece3", 00:12:31.774 "is_configured": true, 00:12:31.774 "data_offset": 0, 00:12:31.774 "data_size": 65536 00:12:31.774 }, 00:12:31.774 { 00:12:31.774 "name": "BaseBdev3", 00:12:31.774 "uuid": "28bf680f-10e8-4bf7-8f33-da3e7ee7a844", 00:12:31.774 "is_configured": true, 00:12:31.774 "data_offset": 0, 00:12:31.774 "data_size": 65536 00:12:31.774 }, 00:12:31.774 { 00:12:31.774 "name": "BaseBdev4", 00:12:31.774 "uuid": "ede4b15c-88cc-4eea-bc54-b0775cb78269", 00:12:31.774 "is_configured": true, 00:12:31.774 "data_offset": 0, 00:12:31.774 "data_size": 65536 00:12:31.774 } 00:12:31.774 ] 00:12:31.774 } 00:12:31.774 } 00:12:31.774 }' 00:12:31.774 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:31.774 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:31.774 BaseBdev2 00:12:31.774 BaseBdev3 
00:12:31.774 BaseBdev4' 00:12:31.774 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.032 19:01:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:32.032 19:01:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:32.032 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:32.033 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.033 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.033 [2024-11-26 19:01:58.627758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.290 
19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.290 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.290 "name": "Existed_Raid", 00:12:32.290 "uuid": "564fc25d-0a5c-4fdb-8113-b701711923b0", 00:12:32.290 "strip_size_kb": 0, 00:12:32.290 "state": "online", 00:12:32.290 "raid_level": "raid1", 00:12:32.290 "superblock": false, 00:12:32.290 "num_base_bdevs": 4, 00:12:32.290 "num_base_bdevs_discovered": 3, 00:12:32.290 "num_base_bdevs_operational": 3, 00:12:32.290 "base_bdevs_list": [ 00:12:32.290 { 00:12:32.290 "name": null, 00:12:32.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.290 "is_configured": false, 00:12:32.290 "data_offset": 0, 00:12:32.290 "data_size": 65536 00:12:32.290 }, 00:12:32.290 { 00:12:32.290 "name": "BaseBdev2", 00:12:32.290 "uuid": "ad8c4f42-102a-4198-918f-8c924e7fece3", 00:12:32.290 "is_configured": true, 00:12:32.290 "data_offset": 0, 00:12:32.290 "data_size": 65536 00:12:32.290 }, 00:12:32.290 { 00:12:32.290 "name": "BaseBdev3", 00:12:32.290 "uuid": "28bf680f-10e8-4bf7-8f33-da3e7ee7a844", 00:12:32.290 "is_configured": true, 00:12:32.290 "data_offset": 0, 
00:12:32.290 "data_size": 65536 00:12:32.290 }, 00:12:32.290 { 00:12:32.290 "name": "BaseBdev4", 00:12:32.290 "uuid": "ede4b15c-88cc-4eea-bc54-b0775cb78269", 00:12:32.290 "is_configured": true, 00:12:32.291 "data_offset": 0, 00:12:32.291 "data_size": 65536 00:12:32.291 } 00:12:32.291 ] 00:12:32.291 }' 00:12:32.291 19:01:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.291 19:01:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.855 [2024-11-26 19:01:59.309167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.855 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.855 [2024-11-26 19:01:59.471860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.112 [2024-11-26 19:01:59.623209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:33.112 [2024-11-26 19:01:59.623398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.112 [2024-11-26 19:01:59.710375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.112 [2024-11-26 19:01:59.710440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.112 [2024-11-26 19:01:59.710461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.112 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.371 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:33.371 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:33.371 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:33.371 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:33.371 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.372 BaseBdev2 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.372 [ 00:12:33.372 { 00:12:33.372 "name": "BaseBdev2", 00:12:33.372 "aliases": [ 00:12:33.372 "9f02acba-164e-4a9c-86a4-1b131910edac" 00:12:33.372 ], 00:12:33.372 "product_name": "Malloc disk", 00:12:33.372 "block_size": 512, 00:12:33.372 "num_blocks": 65536, 00:12:33.372 "uuid": "9f02acba-164e-4a9c-86a4-1b131910edac", 00:12:33.372 "assigned_rate_limits": { 00:12:33.372 "rw_ios_per_sec": 0, 00:12:33.372 "rw_mbytes_per_sec": 0, 00:12:33.372 "r_mbytes_per_sec": 0, 00:12:33.372 "w_mbytes_per_sec": 0 00:12:33.372 }, 00:12:33.372 "claimed": false, 00:12:33.372 "zoned": false, 00:12:33.372 "supported_io_types": { 00:12:33.372 "read": true, 00:12:33.372 "write": true, 00:12:33.372 "unmap": true, 00:12:33.372 "flush": true, 00:12:33.372 "reset": true, 00:12:33.372 "nvme_admin": false, 00:12:33.372 "nvme_io": false, 00:12:33.372 "nvme_io_md": false, 00:12:33.372 "write_zeroes": true, 00:12:33.372 "zcopy": true, 00:12:33.372 "get_zone_info": false, 00:12:33.372 "zone_management": false, 00:12:33.372 "zone_append": false, 
00:12:33.372 "compare": false, 00:12:33.372 "compare_and_write": false, 00:12:33.372 "abort": true, 00:12:33.372 "seek_hole": false, 00:12:33.372 "seek_data": false, 00:12:33.372 "copy": true, 00:12:33.372 "nvme_iov_md": false 00:12:33.372 }, 00:12:33.372 "memory_domains": [ 00:12:33.372 { 00:12:33.372 "dma_device_id": "system", 00:12:33.372 "dma_device_type": 1 00:12:33.372 }, 00:12:33.372 { 00:12:33.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.372 "dma_device_type": 2 00:12:33.372 } 00:12:33.372 ], 00:12:33.372 "driver_specific": {} 00:12:33.372 } 00:12:33.372 ] 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.372 BaseBdev3 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.372 [ 00:12:33.372 { 00:12:33.372 "name": "BaseBdev3", 00:12:33.372 "aliases": [ 00:12:33.372 "ef1a1371-c5d1-42ad-8373-f4ba5e1ce293" 00:12:33.372 ], 00:12:33.372 "product_name": "Malloc disk", 00:12:33.372 "block_size": 512, 00:12:33.372 "num_blocks": 65536, 00:12:33.372 "uuid": "ef1a1371-c5d1-42ad-8373-f4ba5e1ce293", 00:12:33.372 "assigned_rate_limits": { 00:12:33.372 "rw_ios_per_sec": 0, 00:12:33.372 "rw_mbytes_per_sec": 0, 00:12:33.372 "r_mbytes_per_sec": 0, 00:12:33.372 "w_mbytes_per_sec": 0 00:12:33.372 }, 00:12:33.372 "claimed": false, 00:12:33.372 "zoned": false, 00:12:33.372 "supported_io_types": { 00:12:33.372 "read": true, 00:12:33.372 "write": true, 00:12:33.372 "unmap": true, 00:12:33.372 "flush": true, 00:12:33.372 "reset": true, 00:12:33.372 "nvme_admin": false, 00:12:33.372 "nvme_io": false, 00:12:33.372 "nvme_io_md": false, 00:12:33.372 "write_zeroes": true, 00:12:33.372 "zcopy": true, 00:12:33.372 "get_zone_info": false, 00:12:33.372 "zone_management": false, 00:12:33.372 "zone_append": false, 
00:12:33.372 "compare": false, 00:12:33.372 "compare_and_write": false, 00:12:33.372 "abort": true, 00:12:33.372 "seek_hole": false, 00:12:33.372 "seek_data": false, 00:12:33.372 "copy": true, 00:12:33.372 "nvme_iov_md": false 00:12:33.372 }, 00:12:33.372 "memory_domains": [ 00:12:33.372 { 00:12:33.372 "dma_device_id": "system", 00:12:33.372 "dma_device_type": 1 00:12:33.372 }, 00:12:33.372 { 00:12:33.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.372 "dma_device_type": 2 00:12:33.372 } 00:12:33.372 ], 00:12:33.372 "driver_specific": {} 00:12:33.372 } 00:12:33.372 ] 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:33.372 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.373 BaseBdev4 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.373 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.631 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.631 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:33.631 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.631 19:01:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.631 [ 00:12:33.631 { 00:12:33.631 "name": "BaseBdev4", 00:12:33.631 "aliases": [ 00:12:33.631 "80a09d1e-ae2c-4d9b-8a5e-0d1d5e63975f" 00:12:33.631 ], 00:12:33.631 "product_name": "Malloc disk", 00:12:33.631 "block_size": 512, 00:12:33.631 "num_blocks": 65536, 00:12:33.631 "uuid": "80a09d1e-ae2c-4d9b-8a5e-0d1d5e63975f", 00:12:33.631 "assigned_rate_limits": { 00:12:33.631 "rw_ios_per_sec": 0, 00:12:33.631 "rw_mbytes_per_sec": 0, 00:12:33.631 "r_mbytes_per_sec": 0, 00:12:33.631 "w_mbytes_per_sec": 0 00:12:33.631 }, 00:12:33.631 "claimed": false, 00:12:33.631 "zoned": false, 00:12:33.631 "supported_io_types": { 00:12:33.631 "read": true, 00:12:33.631 "write": true, 00:12:33.631 "unmap": true, 00:12:33.631 "flush": true, 00:12:33.631 "reset": true, 00:12:33.631 "nvme_admin": false, 00:12:33.631 "nvme_io": false, 00:12:33.631 "nvme_io_md": false, 00:12:33.631 "write_zeroes": true, 00:12:33.631 "zcopy": true, 00:12:33.631 "get_zone_info": false, 00:12:33.631 "zone_management": false, 00:12:33.631 "zone_append": false, 
00:12:33.631 "compare": false, 00:12:33.631 "compare_and_write": false, 00:12:33.631 "abort": true, 00:12:33.631 "seek_hole": false, 00:12:33.631 "seek_data": false, 00:12:33.631 "copy": true, 00:12:33.631 "nvme_iov_md": false 00:12:33.631 }, 00:12:33.631 "memory_domains": [ 00:12:33.631 { 00:12:33.631 "dma_device_id": "system", 00:12:33.631 "dma_device_type": 1 00:12:33.631 }, 00:12:33.631 { 00:12:33.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.631 "dma_device_type": 2 00:12:33.631 } 00:12:33.631 ], 00:12:33.631 "driver_specific": {} 00:12:33.631 } 00:12:33.631 ] 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.631 [2024-11-26 19:02:00.023441] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:33.631 [2024-11-26 19:02:00.023533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:33.631 [2024-11-26 19:02:00.023562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.631 [2024-11-26 19:02:00.026212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.631 [2024-11-26 19:02:00.026312] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:33.631 "name": "Existed_Raid", 00:12:33.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.631 "strip_size_kb": 0, 00:12:33.631 "state": "configuring", 00:12:33.631 "raid_level": "raid1", 00:12:33.631 "superblock": false, 00:12:33.631 "num_base_bdevs": 4, 00:12:33.631 "num_base_bdevs_discovered": 3, 00:12:33.631 "num_base_bdevs_operational": 4, 00:12:33.631 "base_bdevs_list": [ 00:12:33.631 { 00:12:33.631 "name": "BaseBdev1", 00:12:33.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.631 "is_configured": false, 00:12:33.631 "data_offset": 0, 00:12:33.631 "data_size": 0 00:12:33.631 }, 00:12:33.631 { 00:12:33.631 "name": "BaseBdev2", 00:12:33.631 "uuid": "9f02acba-164e-4a9c-86a4-1b131910edac", 00:12:33.631 "is_configured": true, 00:12:33.631 "data_offset": 0, 00:12:33.631 "data_size": 65536 00:12:33.631 }, 00:12:33.631 { 00:12:33.631 "name": "BaseBdev3", 00:12:33.631 "uuid": "ef1a1371-c5d1-42ad-8373-f4ba5e1ce293", 00:12:33.631 "is_configured": true, 00:12:33.631 "data_offset": 0, 00:12:33.631 "data_size": 65536 00:12:33.631 }, 00:12:33.631 { 00:12:33.631 "name": "BaseBdev4", 00:12:33.631 "uuid": "80a09d1e-ae2c-4d9b-8a5e-0d1d5e63975f", 00:12:33.631 "is_configured": true, 00:12:33.631 "data_offset": 0, 00:12:33.631 "data_size": 65536 00:12:33.631 } 00:12:33.631 ] 00:12:33.631 }' 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.631 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.197 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:34.197 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.197 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.197 [2024-11-26 19:02:00.567685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:34.197 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.198 "name": "Existed_Raid", 00:12:34.198 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:34.198 "strip_size_kb": 0, 00:12:34.198 "state": "configuring", 00:12:34.198 "raid_level": "raid1", 00:12:34.198 "superblock": false, 00:12:34.198 "num_base_bdevs": 4, 00:12:34.198 "num_base_bdevs_discovered": 2, 00:12:34.198 "num_base_bdevs_operational": 4, 00:12:34.198 "base_bdevs_list": [ 00:12:34.198 { 00:12:34.198 "name": "BaseBdev1", 00:12:34.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.198 "is_configured": false, 00:12:34.198 "data_offset": 0, 00:12:34.198 "data_size": 0 00:12:34.198 }, 00:12:34.198 { 00:12:34.198 "name": null, 00:12:34.198 "uuid": "9f02acba-164e-4a9c-86a4-1b131910edac", 00:12:34.198 "is_configured": false, 00:12:34.198 "data_offset": 0, 00:12:34.198 "data_size": 65536 00:12:34.198 }, 00:12:34.198 { 00:12:34.198 "name": "BaseBdev3", 00:12:34.198 "uuid": "ef1a1371-c5d1-42ad-8373-f4ba5e1ce293", 00:12:34.198 "is_configured": true, 00:12:34.198 "data_offset": 0, 00:12:34.198 "data_size": 65536 00:12:34.198 }, 00:12:34.198 { 00:12:34.198 "name": "BaseBdev4", 00:12:34.198 "uuid": "80a09d1e-ae2c-4d9b-8a5e-0d1d5e63975f", 00:12:34.198 "is_configured": true, 00:12:34.198 "data_offset": 0, 00:12:34.198 "data_size": 65536 00:12:34.198 } 00:12:34.198 ] 00:12:34.198 }' 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.198 19:02:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.763 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.764 [2024-11-26 19:02:01.206479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.764 BaseBdev1 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.764 [ 00:12:34.764 { 00:12:34.764 "name": "BaseBdev1", 00:12:34.764 "aliases": [ 00:12:34.764 "31851bf0-d397-4827-b4b7-144c141b143a" 00:12:34.764 ], 00:12:34.764 "product_name": "Malloc disk", 00:12:34.764 "block_size": 512, 00:12:34.764 "num_blocks": 65536, 00:12:34.764 "uuid": "31851bf0-d397-4827-b4b7-144c141b143a", 00:12:34.764 "assigned_rate_limits": { 00:12:34.764 "rw_ios_per_sec": 0, 00:12:34.764 "rw_mbytes_per_sec": 0, 00:12:34.764 "r_mbytes_per_sec": 0, 00:12:34.764 "w_mbytes_per_sec": 0 00:12:34.764 }, 00:12:34.764 "claimed": true, 00:12:34.764 "claim_type": "exclusive_write", 00:12:34.764 "zoned": false, 00:12:34.764 "supported_io_types": { 00:12:34.764 "read": true, 00:12:34.764 "write": true, 00:12:34.764 "unmap": true, 00:12:34.764 "flush": true, 00:12:34.764 "reset": true, 00:12:34.764 "nvme_admin": false, 00:12:34.764 "nvme_io": false, 00:12:34.764 "nvme_io_md": false, 00:12:34.764 "write_zeroes": true, 00:12:34.764 "zcopy": true, 00:12:34.764 "get_zone_info": false, 00:12:34.764 "zone_management": false, 00:12:34.764 "zone_append": false, 00:12:34.764 "compare": false, 00:12:34.764 "compare_and_write": false, 00:12:34.764 "abort": true, 00:12:34.764 "seek_hole": false, 00:12:34.764 "seek_data": false, 00:12:34.764 "copy": true, 00:12:34.764 "nvme_iov_md": false 00:12:34.764 }, 00:12:34.764 "memory_domains": [ 00:12:34.764 { 00:12:34.764 "dma_device_id": "system", 00:12:34.764 "dma_device_type": 1 00:12:34.764 }, 00:12:34.764 { 00:12:34.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.764 "dma_device_type": 2 00:12:34.764 } 00:12:34.764 ], 00:12:34.764 "driver_specific": {} 00:12:34.764 } 00:12:34.764 ] 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.764 "name": "Existed_Raid", 00:12:34.764 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:34.764 "strip_size_kb": 0, 00:12:34.764 "state": "configuring", 00:12:34.764 "raid_level": "raid1", 00:12:34.764 "superblock": false, 00:12:34.764 "num_base_bdevs": 4, 00:12:34.764 "num_base_bdevs_discovered": 3, 00:12:34.764 "num_base_bdevs_operational": 4, 00:12:34.764 "base_bdevs_list": [ 00:12:34.764 { 00:12:34.764 "name": "BaseBdev1", 00:12:34.764 "uuid": "31851bf0-d397-4827-b4b7-144c141b143a", 00:12:34.764 "is_configured": true, 00:12:34.764 "data_offset": 0, 00:12:34.764 "data_size": 65536 00:12:34.764 }, 00:12:34.764 { 00:12:34.764 "name": null, 00:12:34.764 "uuid": "9f02acba-164e-4a9c-86a4-1b131910edac", 00:12:34.764 "is_configured": false, 00:12:34.764 "data_offset": 0, 00:12:34.764 "data_size": 65536 00:12:34.764 }, 00:12:34.764 { 00:12:34.764 "name": "BaseBdev3", 00:12:34.764 "uuid": "ef1a1371-c5d1-42ad-8373-f4ba5e1ce293", 00:12:34.764 "is_configured": true, 00:12:34.764 "data_offset": 0, 00:12:34.764 "data_size": 65536 00:12:34.764 }, 00:12:34.764 { 00:12:34.764 "name": "BaseBdev4", 00:12:34.764 "uuid": "80a09d1e-ae2c-4d9b-8a5e-0d1d5e63975f", 00:12:34.764 "is_configured": true, 00:12:34.764 "data_offset": 0, 00:12:34.764 "data_size": 65536 00:12:34.764 } 00:12:34.764 ] 00:12:34.764 }' 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.764 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.331 [2024-11-26 19:02:01.842739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.331 "name": "Existed_Raid", 00:12:35.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.331 "strip_size_kb": 0, 00:12:35.331 "state": "configuring", 00:12:35.331 "raid_level": "raid1", 00:12:35.331 "superblock": false, 00:12:35.331 "num_base_bdevs": 4, 00:12:35.331 "num_base_bdevs_discovered": 2, 00:12:35.331 "num_base_bdevs_operational": 4, 00:12:35.331 "base_bdevs_list": [ 00:12:35.331 { 00:12:35.331 "name": "BaseBdev1", 00:12:35.331 "uuid": "31851bf0-d397-4827-b4b7-144c141b143a", 00:12:35.331 "is_configured": true, 00:12:35.331 "data_offset": 0, 00:12:35.331 "data_size": 65536 00:12:35.331 }, 00:12:35.331 { 00:12:35.331 "name": null, 00:12:35.331 "uuid": "9f02acba-164e-4a9c-86a4-1b131910edac", 00:12:35.331 "is_configured": false, 00:12:35.331 "data_offset": 0, 00:12:35.331 "data_size": 65536 00:12:35.331 }, 00:12:35.331 { 00:12:35.331 "name": null, 00:12:35.331 "uuid": "ef1a1371-c5d1-42ad-8373-f4ba5e1ce293", 00:12:35.331 "is_configured": false, 00:12:35.331 "data_offset": 0, 00:12:35.331 "data_size": 65536 00:12:35.331 }, 00:12:35.331 { 00:12:35.331 "name": "BaseBdev4", 00:12:35.331 "uuid": "80a09d1e-ae2c-4d9b-8a5e-0d1d5e63975f", 00:12:35.331 "is_configured": true, 00:12:35.331 "data_offset": 0, 00:12:35.331 "data_size": 65536 00:12:35.331 } 00:12:35.331 ] 00:12:35.331 }' 00:12:35.331 19:02:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.331 19:02:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.897 [2024-11-26 19:02:02.430880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.897 19:02:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.897 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.898 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.898 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.898 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.898 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.898 "name": "Existed_Raid", 00:12:35.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.898 "strip_size_kb": 0, 00:12:35.898 "state": "configuring", 00:12:35.898 "raid_level": "raid1", 00:12:35.898 "superblock": false, 00:12:35.898 "num_base_bdevs": 4, 00:12:35.898 "num_base_bdevs_discovered": 3, 00:12:35.898 "num_base_bdevs_operational": 4, 00:12:35.898 "base_bdevs_list": [ 00:12:35.898 { 00:12:35.898 "name": "BaseBdev1", 00:12:35.898 "uuid": "31851bf0-d397-4827-b4b7-144c141b143a", 00:12:35.898 "is_configured": true, 00:12:35.898 "data_offset": 0, 00:12:35.898 "data_size": 65536 00:12:35.898 }, 00:12:35.898 { 00:12:35.898 "name": null, 00:12:35.898 "uuid": "9f02acba-164e-4a9c-86a4-1b131910edac", 00:12:35.898 "is_configured": false, 00:12:35.898 "data_offset": 
0, 00:12:35.898 "data_size": 65536 00:12:35.898 }, 00:12:35.898 { 00:12:35.898 "name": "BaseBdev3", 00:12:35.898 "uuid": "ef1a1371-c5d1-42ad-8373-f4ba5e1ce293", 00:12:35.898 "is_configured": true, 00:12:35.898 "data_offset": 0, 00:12:35.898 "data_size": 65536 00:12:35.898 }, 00:12:35.898 { 00:12:35.898 "name": "BaseBdev4", 00:12:35.898 "uuid": "80a09d1e-ae2c-4d9b-8a5e-0d1d5e63975f", 00:12:35.898 "is_configured": true, 00:12:35.898 "data_offset": 0, 00:12:35.898 "data_size": 65536 00:12:35.898 } 00:12:35.898 ] 00:12:35.898 }' 00:12:35.898 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.898 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.463 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.463 19:02:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:36.463 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.463 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.463 19:02:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.463 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:36.463 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:36.463 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.463 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.463 [2024-11-26 19:02:03.019115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.722 19:02:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.722 "name": "Existed_Raid", 00:12:36.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.722 "strip_size_kb": 0, 00:12:36.722 "state": "configuring", 00:12:36.722 
"raid_level": "raid1", 00:12:36.722 "superblock": false, 00:12:36.722 "num_base_bdevs": 4, 00:12:36.722 "num_base_bdevs_discovered": 2, 00:12:36.722 "num_base_bdevs_operational": 4, 00:12:36.722 "base_bdevs_list": [ 00:12:36.722 { 00:12:36.722 "name": null, 00:12:36.722 "uuid": "31851bf0-d397-4827-b4b7-144c141b143a", 00:12:36.722 "is_configured": false, 00:12:36.722 "data_offset": 0, 00:12:36.722 "data_size": 65536 00:12:36.722 }, 00:12:36.722 { 00:12:36.722 "name": null, 00:12:36.722 "uuid": "9f02acba-164e-4a9c-86a4-1b131910edac", 00:12:36.722 "is_configured": false, 00:12:36.722 "data_offset": 0, 00:12:36.722 "data_size": 65536 00:12:36.722 }, 00:12:36.722 { 00:12:36.722 "name": "BaseBdev3", 00:12:36.722 "uuid": "ef1a1371-c5d1-42ad-8373-f4ba5e1ce293", 00:12:36.722 "is_configured": true, 00:12:36.722 "data_offset": 0, 00:12:36.722 "data_size": 65536 00:12:36.722 }, 00:12:36.722 { 00:12:36.722 "name": "BaseBdev4", 00:12:36.722 "uuid": "80a09d1e-ae2c-4d9b-8a5e-0d1d5e63975f", 00:12:36.722 "is_configured": true, 00:12:36.722 "data_offset": 0, 00:12:36.722 "data_size": 65536 00:12:36.722 } 00:12:36.722 ] 00:12:36.722 }' 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.722 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.290 [2024-11-26 19:02:03.696767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.290 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.290 "name": "Existed_Raid", 00:12:37.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.290 "strip_size_kb": 0, 00:12:37.290 "state": "configuring", 00:12:37.290 "raid_level": "raid1", 00:12:37.290 "superblock": false, 00:12:37.290 "num_base_bdevs": 4, 00:12:37.290 "num_base_bdevs_discovered": 3, 00:12:37.290 "num_base_bdevs_operational": 4, 00:12:37.290 "base_bdevs_list": [ 00:12:37.290 { 00:12:37.290 "name": null, 00:12:37.290 "uuid": "31851bf0-d397-4827-b4b7-144c141b143a", 00:12:37.290 "is_configured": false, 00:12:37.291 "data_offset": 0, 00:12:37.291 "data_size": 65536 00:12:37.291 }, 00:12:37.291 { 00:12:37.291 "name": "BaseBdev2", 00:12:37.291 "uuid": "9f02acba-164e-4a9c-86a4-1b131910edac", 00:12:37.291 "is_configured": true, 00:12:37.291 "data_offset": 0, 00:12:37.291 "data_size": 65536 00:12:37.291 }, 00:12:37.291 { 00:12:37.291 "name": "BaseBdev3", 00:12:37.291 "uuid": "ef1a1371-c5d1-42ad-8373-f4ba5e1ce293", 00:12:37.291 "is_configured": true, 00:12:37.291 "data_offset": 0, 00:12:37.291 "data_size": 65536 00:12:37.291 }, 00:12:37.291 { 00:12:37.291 "name": "BaseBdev4", 00:12:37.291 "uuid": "80a09d1e-ae2c-4d9b-8a5e-0d1d5e63975f", 00:12:37.291 "is_configured": true, 00:12:37.291 "data_offset": 0, 00:12:37.291 "data_size": 65536 00:12:37.291 } 00:12:37.291 ] 00:12:37.291 }' 00:12:37.291 19:02:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.291 19:02:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.858 19:02:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 31851bf0-d397-4827-b4b7-144c141b143a 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.858 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.859 [2024-11-26 19:02:04.383121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:37.859 [2024-11-26 19:02:04.383178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:37.859 [2024-11-26 19:02:04.383194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:37.859 
[2024-11-26 19:02:04.383592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:37.859 [2024-11-26 19:02:04.383844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:37.859 [2024-11-26 19:02:04.383867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:37.859 [2024-11-26 19:02:04.384226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.859 NewBaseBdev 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.859 [ 00:12:37.859 { 00:12:37.859 "name": "NewBaseBdev", 00:12:37.859 "aliases": [ 00:12:37.859 "31851bf0-d397-4827-b4b7-144c141b143a" 00:12:37.859 ], 00:12:37.859 "product_name": "Malloc disk", 00:12:37.859 "block_size": 512, 00:12:37.859 "num_blocks": 65536, 00:12:37.859 "uuid": "31851bf0-d397-4827-b4b7-144c141b143a", 00:12:37.859 "assigned_rate_limits": { 00:12:37.859 "rw_ios_per_sec": 0, 00:12:37.859 "rw_mbytes_per_sec": 0, 00:12:37.859 "r_mbytes_per_sec": 0, 00:12:37.859 "w_mbytes_per_sec": 0 00:12:37.859 }, 00:12:37.859 "claimed": true, 00:12:37.859 "claim_type": "exclusive_write", 00:12:37.859 "zoned": false, 00:12:37.859 "supported_io_types": { 00:12:37.859 "read": true, 00:12:37.859 "write": true, 00:12:37.859 "unmap": true, 00:12:37.859 "flush": true, 00:12:37.859 "reset": true, 00:12:37.859 "nvme_admin": false, 00:12:37.859 "nvme_io": false, 00:12:37.859 "nvme_io_md": false, 00:12:37.859 "write_zeroes": true, 00:12:37.859 "zcopy": true, 00:12:37.859 "get_zone_info": false, 00:12:37.859 "zone_management": false, 00:12:37.859 "zone_append": false, 00:12:37.859 "compare": false, 00:12:37.859 "compare_and_write": false, 00:12:37.859 "abort": true, 00:12:37.859 "seek_hole": false, 00:12:37.859 "seek_data": false, 00:12:37.859 "copy": true, 00:12:37.859 "nvme_iov_md": false 00:12:37.859 }, 00:12:37.859 "memory_domains": [ 00:12:37.859 { 00:12:37.859 "dma_device_id": "system", 00:12:37.859 "dma_device_type": 1 00:12:37.859 }, 00:12:37.859 { 00:12:37.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.859 "dma_device_type": 2 00:12:37.859 } 00:12:37.859 ], 00:12:37.859 "driver_specific": {} 00:12:37.859 } 00:12:37.859 ] 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.859 "name": "Existed_Raid", 00:12:37.859 "uuid": "9f6741a4-8492-4a80-8309-abd55f4370dd", 00:12:37.859 "strip_size_kb": 0, 00:12:37.859 "state": "online", 00:12:37.859 
"raid_level": "raid1", 00:12:37.859 "superblock": false, 00:12:37.859 "num_base_bdevs": 4, 00:12:37.859 "num_base_bdevs_discovered": 4, 00:12:37.859 "num_base_bdevs_operational": 4, 00:12:37.859 "base_bdevs_list": [ 00:12:37.859 { 00:12:37.859 "name": "NewBaseBdev", 00:12:37.859 "uuid": "31851bf0-d397-4827-b4b7-144c141b143a", 00:12:37.859 "is_configured": true, 00:12:37.859 "data_offset": 0, 00:12:37.859 "data_size": 65536 00:12:37.859 }, 00:12:37.859 { 00:12:37.859 "name": "BaseBdev2", 00:12:37.859 "uuid": "9f02acba-164e-4a9c-86a4-1b131910edac", 00:12:37.859 "is_configured": true, 00:12:37.859 "data_offset": 0, 00:12:37.859 "data_size": 65536 00:12:37.859 }, 00:12:37.859 { 00:12:37.859 "name": "BaseBdev3", 00:12:37.859 "uuid": "ef1a1371-c5d1-42ad-8373-f4ba5e1ce293", 00:12:37.859 "is_configured": true, 00:12:37.859 "data_offset": 0, 00:12:37.859 "data_size": 65536 00:12:37.859 }, 00:12:37.859 { 00:12:37.859 "name": "BaseBdev4", 00:12:37.859 "uuid": "80a09d1e-ae2c-4d9b-8a5e-0d1d5e63975f", 00:12:37.859 "is_configured": true, 00:12:37.859 "data_offset": 0, 00:12:37.859 "data_size": 65536 00:12:37.859 } 00:12:37.859 ] 00:12:37.859 }' 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.859 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:38.426 [2024-11-26 19:02:04.931761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:38.426 "name": "Existed_Raid", 00:12:38.426 "aliases": [ 00:12:38.426 "9f6741a4-8492-4a80-8309-abd55f4370dd" 00:12:38.426 ], 00:12:38.426 "product_name": "Raid Volume", 00:12:38.426 "block_size": 512, 00:12:38.426 "num_blocks": 65536, 00:12:38.426 "uuid": "9f6741a4-8492-4a80-8309-abd55f4370dd", 00:12:38.426 "assigned_rate_limits": { 00:12:38.426 "rw_ios_per_sec": 0, 00:12:38.426 "rw_mbytes_per_sec": 0, 00:12:38.426 "r_mbytes_per_sec": 0, 00:12:38.426 "w_mbytes_per_sec": 0 00:12:38.426 }, 00:12:38.426 "claimed": false, 00:12:38.426 "zoned": false, 00:12:38.426 "supported_io_types": { 00:12:38.426 "read": true, 00:12:38.426 "write": true, 00:12:38.426 "unmap": false, 00:12:38.426 "flush": false, 00:12:38.426 "reset": true, 00:12:38.426 "nvme_admin": false, 00:12:38.426 "nvme_io": false, 00:12:38.426 "nvme_io_md": false, 00:12:38.426 "write_zeroes": true, 00:12:38.426 "zcopy": false, 00:12:38.426 "get_zone_info": false, 00:12:38.426 "zone_management": false, 00:12:38.426 "zone_append": false, 00:12:38.426 "compare": false, 00:12:38.426 "compare_and_write": false, 00:12:38.426 "abort": false, 00:12:38.426 "seek_hole": false, 00:12:38.426 "seek_data": false, 00:12:38.426 
"copy": false, 00:12:38.426 "nvme_iov_md": false 00:12:38.426 }, 00:12:38.426 "memory_domains": [ 00:12:38.426 { 00:12:38.426 "dma_device_id": "system", 00:12:38.426 "dma_device_type": 1 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.426 "dma_device_type": 2 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "dma_device_id": "system", 00:12:38.426 "dma_device_type": 1 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.426 "dma_device_type": 2 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "dma_device_id": "system", 00:12:38.426 "dma_device_type": 1 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.426 "dma_device_type": 2 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "dma_device_id": "system", 00:12:38.426 "dma_device_type": 1 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.426 "dma_device_type": 2 00:12:38.426 } 00:12:38.426 ], 00:12:38.426 "driver_specific": { 00:12:38.426 "raid": { 00:12:38.426 "uuid": "9f6741a4-8492-4a80-8309-abd55f4370dd", 00:12:38.426 "strip_size_kb": 0, 00:12:38.426 "state": "online", 00:12:38.426 "raid_level": "raid1", 00:12:38.426 "superblock": false, 00:12:38.426 "num_base_bdevs": 4, 00:12:38.426 "num_base_bdevs_discovered": 4, 00:12:38.426 "num_base_bdevs_operational": 4, 00:12:38.426 "base_bdevs_list": [ 00:12:38.426 { 00:12:38.426 "name": "NewBaseBdev", 00:12:38.426 "uuid": "31851bf0-d397-4827-b4b7-144c141b143a", 00:12:38.426 "is_configured": true, 00:12:38.426 "data_offset": 0, 00:12:38.426 "data_size": 65536 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "name": "BaseBdev2", 00:12:38.426 "uuid": "9f02acba-164e-4a9c-86a4-1b131910edac", 00:12:38.426 "is_configured": true, 00:12:38.426 "data_offset": 0, 00:12:38.426 "data_size": 65536 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "name": "BaseBdev3", 00:12:38.426 "uuid": "ef1a1371-c5d1-42ad-8373-f4ba5e1ce293", 00:12:38.426 
"is_configured": true, 00:12:38.426 "data_offset": 0, 00:12:38.426 "data_size": 65536 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "name": "BaseBdev4", 00:12:38.426 "uuid": "80a09d1e-ae2c-4d9b-8a5e-0d1d5e63975f", 00:12:38.426 "is_configured": true, 00:12:38.426 "data_offset": 0, 00:12:38.426 "data_size": 65536 00:12:38.426 } 00:12:38.426 ] 00:12:38.426 } 00:12:38.426 } 00:12:38.426 }' 00:12:38.426 19:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:38.426 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:38.426 BaseBdev2 00:12:38.426 BaseBdev3 00:12:38.426 BaseBdev4' 00:12:38.426 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.685 19:02:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.685 19:02:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.685 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.943 [2024-11-26 19:02:05.315382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.943 [2024-11-26 19:02:05.315534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.943 [2024-11-26 19:02:05.315666] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.943 [2024-11-26 19:02:05.316136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.943 [2024-11-26 19:02:05.316157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73710 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73710 ']' 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73710 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.943 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73710 00:12:38.943 killing process with pid 73710 00:12:38.944 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.944 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.944 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73710' 00:12:38.944 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73710 00:12:38.944 [2024-11-26 19:02:05.355886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.944 19:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73710 00:12:39.201 [2024-11-26 19:02:05.715336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.576 ************************************ 00:12:40.576 END TEST raid_state_function_test 00:12:40.576 ************************************ 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:40.576 00:12:40.576 real 0m13.264s 00:12:40.576 user 0m21.910s 00:12:40.576 sys 0m1.954s 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:40.576 19:02:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:40.576 19:02:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:40.576 19:02:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.576 19:02:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.576 ************************************ 00:12:40.576 START TEST raid_state_function_test_sb 00:12:40.576 ************************************ 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.576 
19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:40.576 Process raid pid: 74400 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74400 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74400' 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74400 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74400 ']' 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.576 19:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.576 [2024-11-26 19:02:06.996077] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:12:40.576 [2024-11-26 19:02:06.996266] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.576 [2024-11-26 19:02:07.182632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.834 [2024-11-26 19:02:07.333070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.092 [2024-11-26 19:02:07.557230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.092 [2024-11-26 19:02:07.557626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.351 [2024-11-26 19:02:07.961506] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.351 [2024-11-26 19:02:07.961573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.351 [2024-11-26 19:02:07.961592] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:41.351 [2024-11-26 19:02:07.961609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.351 [2024-11-26 19:02:07.961634] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:41.351 [2024-11-26 19:02:07.961648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:41.351 [2024-11-26 19:02:07.961665] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:41.351 [2024-11-26 19:02:07.961680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.351 19:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.609 19:02:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.609 19:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.609 19:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.609 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.609 "name": "Existed_Raid", 00:12:41.609 "uuid": "7ce9bb0c-9a3d-4ddb-8ba3-6c7b76e6bb90", 00:12:41.609 "strip_size_kb": 0, 00:12:41.609 "state": "configuring", 00:12:41.609 "raid_level": "raid1", 00:12:41.609 "superblock": true, 00:12:41.609 "num_base_bdevs": 4, 00:12:41.609 "num_base_bdevs_discovered": 0, 00:12:41.609 "num_base_bdevs_operational": 4, 00:12:41.609 "base_bdevs_list": [ 00:12:41.609 { 00:12:41.609 "name": "BaseBdev1", 00:12:41.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.609 "is_configured": false, 00:12:41.609 "data_offset": 0, 00:12:41.609 "data_size": 0 00:12:41.609 }, 00:12:41.609 { 00:12:41.609 "name": "BaseBdev2", 00:12:41.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.609 "is_configured": false, 00:12:41.609 "data_offset": 0, 00:12:41.609 "data_size": 0 00:12:41.609 }, 00:12:41.609 { 00:12:41.609 "name": "BaseBdev3", 00:12:41.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.609 "is_configured": false, 00:12:41.609 "data_offset": 0, 00:12:41.609 "data_size": 0 00:12:41.609 }, 00:12:41.609 { 00:12:41.609 "name": "BaseBdev4", 00:12:41.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.609 "is_configured": false, 00:12:41.609 "data_offset": 0, 00:12:41.609 "data_size": 0 00:12:41.609 } 00:12:41.609 ] 00:12:41.609 }' 00:12:41.609 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.609 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.868 19:02:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:41.868 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.868 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.129 [2024-11-26 19:02:08.489680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.129 [2024-11-26 19:02:08.489730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.129 [2024-11-26 19:02:08.497666] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.129 [2024-11-26 19:02:08.497730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.129 [2024-11-26 19:02:08.497745] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.129 [2024-11-26 19:02:08.497760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.129 [2024-11-26 19:02:08.497770] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:42.129 [2024-11-26 19:02:08.497784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.129 [2024-11-26 19:02:08.497793] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:42.129 [2024-11-26 19:02:08.497807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.129 [2024-11-26 19:02:08.546460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.129 BaseBdev1 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.129 [ 00:12:42.129 { 00:12:42.129 "name": "BaseBdev1", 00:12:42.129 "aliases": [ 00:12:42.129 "8366c078-65c9-4162-b306-15ea71b4ca3e" 00:12:42.129 ], 00:12:42.129 "product_name": "Malloc disk", 00:12:42.129 "block_size": 512, 00:12:42.129 "num_blocks": 65536, 00:12:42.129 "uuid": "8366c078-65c9-4162-b306-15ea71b4ca3e", 00:12:42.129 "assigned_rate_limits": { 00:12:42.129 "rw_ios_per_sec": 0, 00:12:42.129 "rw_mbytes_per_sec": 0, 00:12:42.129 "r_mbytes_per_sec": 0, 00:12:42.129 "w_mbytes_per_sec": 0 00:12:42.129 }, 00:12:42.129 "claimed": true, 00:12:42.129 "claim_type": "exclusive_write", 00:12:42.129 "zoned": false, 00:12:42.129 "supported_io_types": { 00:12:42.129 "read": true, 00:12:42.129 "write": true, 00:12:42.129 "unmap": true, 00:12:42.129 "flush": true, 00:12:42.129 "reset": true, 00:12:42.129 "nvme_admin": false, 00:12:42.129 "nvme_io": false, 00:12:42.129 "nvme_io_md": false, 00:12:42.129 "write_zeroes": true, 00:12:42.129 "zcopy": true, 00:12:42.129 "get_zone_info": false, 00:12:42.129 "zone_management": false, 00:12:42.129 "zone_append": false, 00:12:42.129 "compare": false, 00:12:42.129 "compare_and_write": false, 00:12:42.129 "abort": true, 00:12:42.129 "seek_hole": false, 00:12:42.129 "seek_data": false, 00:12:42.129 "copy": true, 00:12:42.129 "nvme_iov_md": false 00:12:42.129 }, 00:12:42.129 "memory_domains": [ 00:12:42.129 { 00:12:42.129 "dma_device_id": "system", 00:12:42.129 "dma_device_type": 1 00:12:42.129 }, 00:12:42.129 { 00:12:42.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.129 "dma_device_type": 2 00:12:42.129 } 00:12:42.129 ], 00:12:42.129 "driver_specific": {} 
00:12:42.129 } 00:12:42.129 ] 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.129 "name": "Existed_Raid", 00:12:42.129 "uuid": "8ae51a13-10b5-40c0-918d-67aa4f1dc488", 00:12:42.129 "strip_size_kb": 0, 00:12:42.129 "state": "configuring", 00:12:42.129 "raid_level": "raid1", 00:12:42.129 "superblock": true, 00:12:42.129 "num_base_bdevs": 4, 00:12:42.129 "num_base_bdevs_discovered": 1, 00:12:42.129 "num_base_bdevs_operational": 4, 00:12:42.129 "base_bdevs_list": [ 00:12:42.129 { 00:12:42.129 "name": "BaseBdev1", 00:12:42.129 "uuid": "8366c078-65c9-4162-b306-15ea71b4ca3e", 00:12:42.129 "is_configured": true, 00:12:42.129 "data_offset": 2048, 00:12:42.129 "data_size": 63488 00:12:42.129 }, 00:12:42.129 { 00:12:42.129 "name": "BaseBdev2", 00:12:42.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.129 "is_configured": false, 00:12:42.129 "data_offset": 0, 00:12:42.129 "data_size": 0 00:12:42.129 }, 00:12:42.129 { 00:12:42.129 "name": "BaseBdev3", 00:12:42.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.129 "is_configured": false, 00:12:42.129 "data_offset": 0, 00:12:42.129 "data_size": 0 00:12:42.129 }, 00:12:42.129 { 00:12:42.129 "name": "BaseBdev4", 00:12:42.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.129 "is_configured": false, 00:12:42.129 "data_offset": 0, 00:12:42.129 "data_size": 0 00:12:42.129 } 00:12:42.129 ] 00:12:42.129 }' 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.129 19:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.695 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:42.695 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.695 19:02:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.695 [2024-11-26 19:02:09.118728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.695 [2024-11-26 19:02:09.118795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:42.695 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.695 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:42.695 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.695 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.695 [2024-11-26 19:02:09.130739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.695 [2024-11-26 19:02:09.133367] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.695 [2024-11-26 19:02:09.133420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.695 [2024-11-26 19:02:09.133437] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:42.695 [2024-11-26 19:02:09.133455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.695 [2024-11-26 19:02:09.133465] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:42.695 [2024-11-26 19:02:09.133479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:42.695 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:42.696 19:02:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.696 "name": 
"Existed_Raid", 00:12:42.696 "uuid": "9ca14b9f-60eb-44f6-8867-962440becdb8", 00:12:42.696 "strip_size_kb": 0, 00:12:42.696 "state": "configuring", 00:12:42.696 "raid_level": "raid1", 00:12:42.696 "superblock": true, 00:12:42.696 "num_base_bdevs": 4, 00:12:42.696 "num_base_bdevs_discovered": 1, 00:12:42.696 "num_base_bdevs_operational": 4, 00:12:42.696 "base_bdevs_list": [ 00:12:42.696 { 00:12:42.696 "name": "BaseBdev1", 00:12:42.696 "uuid": "8366c078-65c9-4162-b306-15ea71b4ca3e", 00:12:42.696 "is_configured": true, 00:12:42.696 "data_offset": 2048, 00:12:42.696 "data_size": 63488 00:12:42.696 }, 00:12:42.696 { 00:12:42.696 "name": "BaseBdev2", 00:12:42.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.696 "is_configured": false, 00:12:42.696 "data_offset": 0, 00:12:42.696 "data_size": 0 00:12:42.696 }, 00:12:42.696 { 00:12:42.696 "name": "BaseBdev3", 00:12:42.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.696 "is_configured": false, 00:12:42.696 "data_offset": 0, 00:12:42.696 "data_size": 0 00:12:42.696 }, 00:12:42.696 { 00:12:42.696 "name": "BaseBdev4", 00:12:42.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.696 "is_configured": false, 00:12:42.696 "data_offset": 0, 00:12:42.696 "data_size": 0 00:12:42.696 } 00:12:42.696 ] 00:12:42.696 }' 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.696 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.264 [2024-11-26 19:02:09.682120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.264 
BaseBdev2 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.264 [ 00:12:43.264 { 00:12:43.264 "name": "BaseBdev2", 00:12:43.264 "aliases": [ 00:12:43.264 "f34f6808-fca6-4494-84b2-52bf6f3a4b18" 00:12:43.264 ], 00:12:43.264 "product_name": "Malloc disk", 00:12:43.264 "block_size": 512, 00:12:43.264 "num_blocks": 65536, 00:12:43.264 "uuid": "f34f6808-fca6-4494-84b2-52bf6f3a4b18", 00:12:43.264 "assigned_rate_limits": { 
00:12:43.264 "rw_ios_per_sec": 0, 00:12:43.264 "rw_mbytes_per_sec": 0, 00:12:43.264 "r_mbytes_per_sec": 0, 00:12:43.264 "w_mbytes_per_sec": 0 00:12:43.264 }, 00:12:43.264 "claimed": true, 00:12:43.264 "claim_type": "exclusive_write", 00:12:43.264 "zoned": false, 00:12:43.264 "supported_io_types": { 00:12:43.264 "read": true, 00:12:43.264 "write": true, 00:12:43.264 "unmap": true, 00:12:43.264 "flush": true, 00:12:43.264 "reset": true, 00:12:43.264 "nvme_admin": false, 00:12:43.264 "nvme_io": false, 00:12:43.264 "nvme_io_md": false, 00:12:43.264 "write_zeroes": true, 00:12:43.264 "zcopy": true, 00:12:43.264 "get_zone_info": false, 00:12:43.264 "zone_management": false, 00:12:43.264 "zone_append": false, 00:12:43.264 "compare": false, 00:12:43.264 "compare_and_write": false, 00:12:43.264 "abort": true, 00:12:43.264 "seek_hole": false, 00:12:43.264 "seek_data": false, 00:12:43.264 "copy": true, 00:12:43.264 "nvme_iov_md": false 00:12:43.264 }, 00:12:43.264 "memory_domains": [ 00:12:43.264 { 00:12:43.264 "dma_device_id": "system", 00:12:43.264 "dma_device_type": 1 00:12:43.264 }, 00:12:43.264 { 00:12:43.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.264 "dma_device_type": 2 00:12:43.264 } 00:12:43.264 ], 00:12:43.264 "driver_specific": {} 00:12:43.264 } 00:12:43.264 ] 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.264 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.264 "name": "Existed_Raid", 00:12:43.264 "uuid": "9ca14b9f-60eb-44f6-8867-962440becdb8", 00:12:43.264 "strip_size_kb": 0, 00:12:43.264 "state": "configuring", 00:12:43.264 "raid_level": "raid1", 00:12:43.264 "superblock": true, 00:12:43.264 "num_base_bdevs": 4, 00:12:43.264 "num_base_bdevs_discovered": 2, 00:12:43.264 "num_base_bdevs_operational": 4, 00:12:43.264 
"base_bdevs_list": [ 00:12:43.264 { 00:12:43.264 "name": "BaseBdev1", 00:12:43.264 "uuid": "8366c078-65c9-4162-b306-15ea71b4ca3e", 00:12:43.264 "is_configured": true, 00:12:43.264 "data_offset": 2048, 00:12:43.264 "data_size": 63488 00:12:43.264 }, 00:12:43.264 { 00:12:43.264 "name": "BaseBdev2", 00:12:43.264 "uuid": "f34f6808-fca6-4494-84b2-52bf6f3a4b18", 00:12:43.264 "is_configured": true, 00:12:43.264 "data_offset": 2048, 00:12:43.264 "data_size": 63488 00:12:43.264 }, 00:12:43.264 { 00:12:43.264 "name": "BaseBdev3", 00:12:43.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.264 "is_configured": false, 00:12:43.264 "data_offset": 0, 00:12:43.264 "data_size": 0 00:12:43.264 }, 00:12:43.264 { 00:12:43.264 "name": "BaseBdev4", 00:12:43.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.264 "is_configured": false, 00:12:43.264 "data_offset": 0, 00:12:43.264 "data_size": 0 00:12:43.265 } 00:12:43.265 ] 00:12:43.265 }' 00:12:43.265 19:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.265 19:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.831 [2024-11-26 19:02:10.307194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:43.831 BaseBdev3 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.831 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.831 [ 00:12:43.831 { 00:12:43.831 "name": "BaseBdev3", 00:12:43.831 "aliases": [ 00:12:43.831 "74a0a4e4-edb6-417a-bc55-e9f906bf2a68" 00:12:43.831 ], 00:12:43.831 "product_name": "Malloc disk", 00:12:43.831 "block_size": 512, 00:12:43.831 "num_blocks": 65536, 00:12:43.831 "uuid": "74a0a4e4-edb6-417a-bc55-e9f906bf2a68", 00:12:43.831 "assigned_rate_limits": { 00:12:43.831 "rw_ios_per_sec": 0, 00:12:43.831 "rw_mbytes_per_sec": 0, 00:12:43.831 "r_mbytes_per_sec": 0, 00:12:43.831 "w_mbytes_per_sec": 0 00:12:43.831 }, 00:12:43.831 "claimed": true, 00:12:43.831 "claim_type": "exclusive_write", 00:12:43.832 "zoned": false, 00:12:43.832 "supported_io_types": { 00:12:43.832 "read": true, 00:12:43.832 
"write": true, 00:12:43.832 "unmap": true, 00:12:43.832 "flush": true, 00:12:43.832 "reset": true, 00:12:43.832 "nvme_admin": false, 00:12:43.832 "nvme_io": false, 00:12:43.832 "nvme_io_md": false, 00:12:43.832 "write_zeroes": true, 00:12:43.832 "zcopy": true, 00:12:43.832 "get_zone_info": false, 00:12:43.832 "zone_management": false, 00:12:43.832 "zone_append": false, 00:12:43.832 "compare": false, 00:12:43.832 "compare_and_write": false, 00:12:43.832 "abort": true, 00:12:43.832 "seek_hole": false, 00:12:43.832 "seek_data": false, 00:12:43.832 "copy": true, 00:12:43.832 "nvme_iov_md": false 00:12:43.832 }, 00:12:43.832 "memory_domains": [ 00:12:43.832 { 00:12:43.832 "dma_device_id": "system", 00:12:43.832 "dma_device_type": 1 00:12:43.832 }, 00:12:43.832 { 00:12:43.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.832 "dma_device_type": 2 00:12:43.832 } 00:12:43.832 ], 00:12:43.832 "driver_specific": {} 00:12:43.832 } 00:12:43.832 ] 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.832 "name": "Existed_Raid", 00:12:43.832 "uuid": "9ca14b9f-60eb-44f6-8867-962440becdb8", 00:12:43.832 "strip_size_kb": 0, 00:12:43.832 "state": "configuring", 00:12:43.832 "raid_level": "raid1", 00:12:43.832 "superblock": true, 00:12:43.832 "num_base_bdevs": 4, 00:12:43.832 "num_base_bdevs_discovered": 3, 00:12:43.832 "num_base_bdevs_operational": 4, 00:12:43.832 "base_bdevs_list": [ 00:12:43.832 { 00:12:43.832 "name": "BaseBdev1", 00:12:43.832 "uuid": "8366c078-65c9-4162-b306-15ea71b4ca3e", 00:12:43.832 "is_configured": true, 00:12:43.832 "data_offset": 2048, 00:12:43.832 "data_size": 63488 00:12:43.832 }, 00:12:43.832 { 00:12:43.832 "name": "BaseBdev2", 00:12:43.832 "uuid": 
"f34f6808-fca6-4494-84b2-52bf6f3a4b18", 00:12:43.832 "is_configured": true, 00:12:43.832 "data_offset": 2048, 00:12:43.832 "data_size": 63488 00:12:43.832 }, 00:12:43.832 { 00:12:43.832 "name": "BaseBdev3", 00:12:43.832 "uuid": "74a0a4e4-edb6-417a-bc55-e9f906bf2a68", 00:12:43.832 "is_configured": true, 00:12:43.832 "data_offset": 2048, 00:12:43.832 "data_size": 63488 00:12:43.832 }, 00:12:43.832 { 00:12:43.832 "name": "BaseBdev4", 00:12:43.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.832 "is_configured": false, 00:12:43.832 "data_offset": 0, 00:12:43.832 "data_size": 0 00:12:43.832 } 00:12:43.832 ] 00:12:43.832 }' 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.832 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 [2024-11-26 19:02:10.936212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.399 [2024-11-26 19:02:10.936645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:44.399 [2024-11-26 19:02:10.936666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:44.399 BaseBdev4 00:12:44.399 [2024-11-26 19:02:10.937032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:44.399 [2024-11-26 19:02:10.937265] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:44.399 [2024-11-26 19:02:10.937303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:44.399 [2024-11-26 19:02:10.937515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 [ 00:12:44.399 { 00:12:44.399 "name": "BaseBdev4", 00:12:44.399 "aliases": [ 00:12:44.399 "b79e5b60-2759-4cf3-9765-3dd4c5ec2976" 00:12:44.399 ], 00:12:44.399 "product_name": "Malloc disk", 00:12:44.399 "block_size": 512, 00:12:44.399 
"num_blocks": 65536, 00:12:44.399 "uuid": "b79e5b60-2759-4cf3-9765-3dd4c5ec2976", 00:12:44.399 "assigned_rate_limits": { 00:12:44.399 "rw_ios_per_sec": 0, 00:12:44.399 "rw_mbytes_per_sec": 0, 00:12:44.399 "r_mbytes_per_sec": 0, 00:12:44.399 "w_mbytes_per_sec": 0 00:12:44.399 }, 00:12:44.399 "claimed": true, 00:12:44.399 "claim_type": "exclusive_write", 00:12:44.399 "zoned": false, 00:12:44.399 "supported_io_types": { 00:12:44.399 "read": true, 00:12:44.399 "write": true, 00:12:44.399 "unmap": true, 00:12:44.399 "flush": true, 00:12:44.399 "reset": true, 00:12:44.399 "nvme_admin": false, 00:12:44.399 "nvme_io": false, 00:12:44.399 "nvme_io_md": false, 00:12:44.399 "write_zeroes": true, 00:12:44.399 "zcopy": true, 00:12:44.399 "get_zone_info": false, 00:12:44.399 "zone_management": false, 00:12:44.399 "zone_append": false, 00:12:44.399 "compare": false, 00:12:44.399 "compare_and_write": false, 00:12:44.399 "abort": true, 00:12:44.399 "seek_hole": false, 00:12:44.399 "seek_data": false, 00:12:44.399 "copy": true, 00:12:44.399 "nvme_iov_md": false 00:12:44.399 }, 00:12:44.399 "memory_domains": [ 00:12:44.399 { 00:12:44.399 "dma_device_id": "system", 00:12:44.399 "dma_device_type": 1 00:12:44.399 }, 00:12:44.399 { 00:12:44.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.399 "dma_device_type": 2 00:12:44.399 } 00:12:44.399 ], 00:12:44.399 "driver_specific": {} 00:12:44.399 } 00:12:44.399 ] 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 19:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.658 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.658 "name": "Existed_Raid", 00:12:44.658 "uuid": "9ca14b9f-60eb-44f6-8867-962440becdb8", 00:12:44.658 "strip_size_kb": 0, 00:12:44.658 "state": "online", 00:12:44.658 "raid_level": "raid1", 00:12:44.658 "superblock": true, 00:12:44.658 "num_base_bdevs": 4, 
00:12:44.658 "num_base_bdevs_discovered": 4, 00:12:44.658 "num_base_bdevs_operational": 4, 00:12:44.658 "base_bdevs_list": [ 00:12:44.658 { 00:12:44.658 "name": "BaseBdev1", 00:12:44.658 "uuid": "8366c078-65c9-4162-b306-15ea71b4ca3e", 00:12:44.658 "is_configured": true, 00:12:44.658 "data_offset": 2048, 00:12:44.658 "data_size": 63488 00:12:44.658 }, 00:12:44.658 { 00:12:44.658 "name": "BaseBdev2", 00:12:44.658 "uuid": "f34f6808-fca6-4494-84b2-52bf6f3a4b18", 00:12:44.658 "is_configured": true, 00:12:44.658 "data_offset": 2048, 00:12:44.658 "data_size": 63488 00:12:44.658 }, 00:12:44.658 { 00:12:44.658 "name": "BaseBdev3", 00:12:44.658 "uuid": "74a0a4e4-edb6-417a-bc55-e9f906bf2a68", 00:12:44.658 "is_configured": true, 00:12:44.658 "data_offset": 2048, 00:12:44.658 "data_size": 63488 00:12:44.658 }, 00:12:44.658 { 00:12:44.658 "name": "BaseBdev4", 00:12:44.658 "uuid": "b79e5b60-2759-4cf3-9765-3dd4c5ec2976", 00:12:44.658 "is_configured": true, 00:12:44.658 "data_offset": 2048, 00:12:44.658 "data_size": 63488 00:12:44.658 } 00:12:44.658 ] 00:12:44.658 }' 00:12:44.658 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.658 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:44.917 
19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.917 [2024-11-26 19:02:11.480889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:44.917 "name": "Existed_Raid", 00:12:44.917 "aliases": [ 00:12:44.917 "9ca14b9f-60eb-44f6-8867-962440becdb8" 00:12:44.917 ], 00:12:44.917 "product_name": "Raid Volume", 00:12:44.917 "block_size": 512, 00:12:44.917 "num_blocks": 63488, 00:12:44.917 "uuid": "9ca14b9f-60eb-44f6-8867-962440becdb8", 00:12:44.917 "assigned_rate_limits": { 00:12:44.917 "rw_ios_per_sec": 0, 00:12:44.917 "rw_mbytes_per_sec": 0, 00:12:44.917 "r_mbytes_per_sec": 0, 00:12:44.917 "w_mbytes_per_sec": 0 00:12:44.917 }, 00:12:44.917 "claimed": false, 00:12:44.917 "zoned": false, 00:12:44.917 "supported_io_types": { 00:12:44.917 "read": true, 00:12:44.917 "write": true, 00:12:44.917 "unmap": false, 00:12:44.917 "flush": false, 00:12:44.917 "reset": true, 00:12:44.917 "nvme_admin": false, 00:12:44.917 "nvme_io": false, 00:12:44.917 "nvme_io_md": false, 00:12:44.917 "write_zeroes": true, 00:12:44.917 "zcopy": false, 00:12:44.917 "get_zone_info": false, 00:12:44.917 "zone_management": false, 00:12:44.917 "zone_append": false, 00:12:44.917 "compare": false, 00:12:44.917 "compare_and_write": false, 00:12:44.917 "abort": false, 00:12:44.917 "seek_hole": false, 00:12:44.917 "seek_data": false, 00:12:44.917 "copy": false, 00:12:44.917 
"nvme_iov_md": false 00:12:44.917 }, 00:12:44.917 "memory_domains": [ 00:12:44.917 { 00:12:44.917 "dma_device_id": "system", 00:12:44.917 "dma_device_type": 1 00:12:44.917 }, 00:12:44.917 { 00:12:44.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.917 "dma_device_type": 2 00:12:44.917 }, 00:12:44.917 { 00:12:44.917 "dma_device_id": "system", 00:12:44.917 "dma_device_type": 1 00:12:44.917 }, 00:12:44.917 { 00:12:44.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.917 "dma_device_type": 2 00:12:44.917 }, 00:12:44.917 { 00:12:44.917 "dma_device_id": "system", 00:12:44.917 "dma_device_type": 1 00:12:44.917 }, 00:12:44.917 { 00:12:44.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.917 "dma_device_type": 2 00:12:44.917 }, 00:12:44.917 { 00:12:44.917 "dma_device_id": "system", 00:12:44.917 "dma_device_type": 1 00:12:44.917 }, 00:12:44.917 { 00:12:44.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.917 "dma_device_type": 2 00:12:44.917 } 00:12:44.917 ], 00:12:44.917 "driver_specific": { 00:12:44.917 "raid": { 00:12:44.917 "uuid": "9ca14b9f-60eb-44f6-8867-962440becdb8", 00:12:44.917 "strip_size_kb": 0, 00:12:44.917 "state": "online", 00:12:44.917 "raid_level": "raid1", 00:12:44.917 "superblock": true, 00:12:44.917 "num_base_bdevs": 4, 00:12:44.917 "num_base_bdevs_discovered": 4, 00:12:44.917 "num_base_bdevs_operational": 4, 00:12:44.917 "base_bdevs_list": [ 00:12:44.917 { 00:12:44.917 "name": "BaseBdev1", 00:12:44.917 "uuid": "8366c078-65c9-4162-b306-15ea71b4ca3e", 00:12:44.917 "is_configured": true, 00:12:44.917 "data_offset": 2048, 00:12:44.917 "data_size": 63488 00:12:44.917 }, 00:12:44.917 { 00:12:44.917 "name": "BaseBdev2", 00:12:44.917 "uuid": "f34f6808-fca6-4494-84b2-52bf6f3a4b18", 00:12:44.917 "is_configured": true, 00:12:44.917 "data_offset": 2048, 00:12:44.917 "data_size": 63488 00:12:44.917 }, 00:12:44.917 { 00:12:44.917 "name": "BaseBdev3", 00:12:44.917 "uuid": "74a0a4e4-edb6-417a-bc55-e9f906bf2a68", 00:12:44.917 "is_configured": true, 
00:12:44.917 "data_offset": 2048, 00:12:44.917 "data_size": 63488 00:12:44.917 }, 00:12:44.917 { 00:12:44.917 "name": "BaseBdev4", 00:12:44.917 "uuid": "b79e5b60-2759-4cf3-9765-3dd4c5ec2976", 00:12:44.917 "is_configured": true, 00:12:44.917 "data_offset": 2048, 00:12:44.917 "data_size": 63488 00:12:44.917 } 00:12:44.917 ] 00:12:44.917 } 00:12:44.917 } 00:12:44.917 }' 00:12:44.917 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:45.176 BaseBdev2 00:12:45.176 BaseBdev3 00:12:45.176 BaseBdev4' 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.176 19:02:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.176 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.436 [2024-11-26 19:02:11.840582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:45.436 19:02:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.436 "name": "Existed_Raid", 00:12:45.436 "uuid": "9ca14b9f-60eb-44f6-8867-962440becdb8", 00:12:45.436 "strip_size_kb": 0, 00:12:45.436 
"state": "online", 00:12:45.436 "raid_level": "raid1", 00:12:45.436 "superblock": true, 00:12:45.436 "num_base_bdevs": 4, 00:12:45.436 "num_base_bdevs_discovered": 3, 00:12:45.436 "num_base_bdevs_operational": 3, 00:12:45.436 "base_bdevs_list": [ 00:12:45.436 { 00:12:45.436 "name": null, 00:12:45.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.436 "is_configured": false, 00:12:45.436 "data_offset": 0, 00:12:45.436 "data_size": 63488 00:12:45.436 }, 00:12:45.436 { 00:12:45.436 "name": "BaseBdev2", 00:12:45.436 "uuid": "f34f6808-fca6-4494-84b2-52bf6f3a4b18", 00:12:45.436 "is_configured": true, 00:12:45.436 "data_offset": 2048, 00:12:45.436 "data_size": 63488 00:12:45.436 }, 00:12:45.436 { 00:12:45.436 "name": "BaseBdev3", 00:12:45.436 "uuid": "74a0a4e4-edb6-417a-bc55-e9f906bf2a68", 00:12:45.436 "is_configured": true, 00:12:45.436 "data_offset": 2048, 00:12:45.436 "data_size": 63488 00:12:45.436 }, 00:12:45.436 { 00:12:45.436 "name": "BaseBdev4", 00:12:45.436 "uuid": "b79e5b60-2759-4cf3-9765-3dd4c5ec2976", 00:12:45.436 "is_configured": true, 00:12:45.436 "data_offset": 2048, 00:12:45.436 "data_size": 63488 00:12:45.436 } 00:12:45.436 ] 00:12:45.436 }' 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.436 19:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.002 19:02:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.002 [2024-11-26 19:02:12.499505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.002 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.261 [2024-11-26 19:02:12.657301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.261 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.261 [2024-11-26 19:02:12.805095] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:46.261 [2024-11-26 19:02:12.805237] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.520 [2024-11-26 19:02:12.896259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.520 [2024-11-26 19:02:12.896582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.520 [2024-11-26 19:02:12.896618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.520 19:02:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.520 BaseBdev2 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:46.520 [ 00:12:46.520 { 00:12:46.520 "name": "BaseBdev2", 00:12:46.520 "aliases": [ 00:12:46.520 "33334003-2f66-435b-ace9-9a0751d2e6cd" 00:12:46.520 ], 00:12:46.520 "product_name": "Malloc disk", 00:12:46.520 "block_size": 512, 00:12:46.520 "num_blocks": 65536, 00:12:46.520 "uuid": "33334003-2f66-435b-ace9-9a0751d2e6cd", 00:12:46.520 "assigned_rate_limits": { 00:12:46.520 "rw_ios_per_sec": 0, 00:12:46.520 "rw_mbytes_per_sec": 0, 00:12:46.520 "r_mbytes_per_sec": 0, 00:12:46.520 "w_mbytes_per_sec": 0 00:12:46.520 }, 00:12:46.520 "claimed": false, 00:12:46.520 "zoned": false, 00:12:46.520 "supported_io_types": { 00:12:46.520 "read": true, 00:12:46.520 "write": true, 00:12:46.520 "unmap": true, 00:12:46.520 "flush": true, 00:12:46.520 "reset": true, 00:12:46.520 "nvme_admin": false, 00:12:46.520 "nvme_io": false, 00:12:46.520 "nvme_io_md": false, 00:12:46.520 "write_zeroes": true, 00:12:46.520 "zcopy": true, 00:12:46.520 "get_zone_info": false, 00:12:46.520 "zone_management": false, 00:12:46.520 "zone_append": false, 00:12:46.520 "compare": false, 00:12:46.520 "compare_and_write": false, 00:12:46.520 "abort": true, 00:12:46.520 "seek_hole": false, 00:12:46.520 "seek_data": false, 00:12:46.520 "copy": true, 00:12:46.520 "nvme_iov_md": false 00:12:46.520 }, 00:12:46.520 "memory_domains": [ 00:12:46.520 { 00:12:46.520 "dma_device_id": "system", 00:12:46.520 "dma_device_type": 1 00:12:46.520 }, 00:12:46.520 { 00:12:46.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.520 "dma_device_type": 2 00:12:46.520 } 00:12:46.520 ], 00:12:46.520 "driver_specific": {} 00:12:46.520 } 00:12:46.520 ] 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.520 19:02:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.520 BaseBdev3 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.520 19:02:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.520 [ 00:12:46.520 { 00:12:46.520 "name": "BaseBdev3", 00:12:46.520 "aliases": [ 00:12:46.520 "375218ae-a7e8-470f-9897-cb4fb0e1f965" 00:12:46.520 ], 00:12:46.520 "product_name": "Malloc disk", 00:12:46.520 "block_size": 512, 00:12:46.520 "num_blocks": 65536, 00:12:46.520 "uuid": "375218ae-a7e8-470f-9897-cb4fb0e1f965", 00:12:46.520 "assigned_rate_limits": { 00:12:46.520 "rw_ios_per_sec": 0, 00:12:46.520 "rw_mbytes_per_sec": 0, 00:12:46.520 "r_mbytes_per_sec": 0, 00:12:46.520 "w_mbytes_per_sec": 0 00:12:46.520 }, 00:12:46.520 "claimed": false, 00:12:46.520 "zoned": false, 00:12:46.520 "supported_io_types": { 00:12:46.520 "read": true, 00:12:46.520 "write": true, 00:12:46.520 "unmap": true, 00:12:46.520 "flush": true, 00:12:46.520 "reset": true, 00:12:46.520 "nvme_admin": false, 00:12:46.520 "nvme_io": false, 00:12:46.520 "nvme_io_md": false, 00:12:46.520 "write_zeroes": true, 00:12:46.520 "zcopy": true, 00:12:46.520 "get_zone_info": false, 00:12:46.520 "zone_management": false, 00:12:46.520 "zone_append": false, 00:12:46.520 "compare": false, 00:12:46.520 "compare_and_write": false, 00:12:46.520 "abort": true, 00:12:46.520 "seek_hole": false, 00:12:46.520 "seek_data": false, 00:12:46.520 "copy": true, 00:12:46.520 "nvme_iov_md": false 00:12:46.520 }, 00:12:46.520 "memory_domains": [ 00:12:46.520 { 00:12:46.520 "dma_device_id": "system", 00:12:46.520 "dma_device_type": 1 00:12:46.520 }, 00:12:46.520 { 00:12:46.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.520 "dma_device_type": 2 00:12:46.520 } 00:12:46.520 ], 00:12:46.520 "driver_specific": {} 00:12:46.520 } 00:12:46.520 ] 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.520 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.778 BaseBdev4 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.778 [ 00:12:46.778 { 00:12:46.778 "name": "BaseBdev4", 00:12:46.778 "aliases": [ 00:12:46.778 "9e1175dc-5676-4524-aac5-44dd9b67f3b0" 00:12:46.778 ], 00:12:46.778 "product_name": "Malloc disk", 00:12:46.778 "block_size": 512, 00:12:46.778 "num_blocks": 65536, 00:12:46.778 "uuid": "9e1175dc-5676-4524-aac5-44dd9b67f3b0", 00:12:46.778 "assigned_rate_limits": { 00:12:46.778 "rw_ios_per_sec": 0, 00:12:46.778 "rw_mbytes_per_sec": 0, 00:12:46.778 "r_mbytes_per_sec": 0, 00:12:46.778 "w_mbytes_per_sec": 0 00:12:46.778 }, 00:12:46.778 "claimed": false, 00:12:46.778 "zoned": false, 00:12:46.778 "supported_io_types": { 00:12:46.778 "read": true, 00:12:46.778 "write": true, 00:12:46.778 "unmap": true, 00:12:46.778 "flush": true, 00:12:46.778 "reset": true, 00:12:46.778 "nvme_admin": false, 00:12:46.778 "nvme_io": false, 00:12:46.778 "nvme_io_md": false, 00:12:46.778 "write_zeroes": true, 00:12:46.778 "zcopy": true, 00:12:46.778 "get_zone_info": false, 00:12:46.778 "zone_management": false, 00:12:46.778 "zone_append": false, 00:12:46.778 "compare": false, 00:12:46.778 "compare_and_write": false, 00:12:46.778 "abort": true, 00:12:46.778 "seek_hole": false, 00:12:46.778 "seek_data": false, 00:12:46.778 "copy": true, 00:12:46.778 "nvme_iov_md": false 00:12:46.778 }, 00:12:46.778 "memory_domains": [ 00:12:46.778 { 00:12:46.778 "dma_device_id": "system", 00:12:46.778 "dma_device_type": 1 00:12:46.778 }, 00:12:46.778 { 00:12:46.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.778 "dma_device_type": 2 00:12:46.778 } 00:12:46.778 ], 00:12:46.778 "driver_specific": {} 00:12:46.778 } 00:12:46.778 ] 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.778 [2024-11-26 19:02:13.201318] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.778 [2024-11-26 19:02:13.201425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.778 [2024-11-26 19:02:13.201468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.778 [2024-11-26 19:02:13.204162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.778 [2024-11-26 19:02:13.204226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:46.778 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.779 "name": "Existed_Raid", 00:12:46.779 "uuid": "cf790901-bac1-4a39-8568-df01024da500", 00:12:46.779 "strip_size_kb": 0, 00:12:46.779 "state": "configuring", 00:12:46.779 "raid_level": "raid1", 00:12:46.779 "superblock": true, 00:12:46.779 "num_base_bdevs": 4, 00:12:46.779 "num_base_bdevs_discovered": 3, 00:12:46.779 "num_base_bdevs_operational": 4, 00:12:46.779 "base_bdevs_list": [ 00:12:46.779 { 00:12:46.779 "name": "BaseBdev1", 00:12:46.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.779 "is_configured": false, 00:12:46.779 "data_offset": 0, 00:12:46.779 "data_size": 0 00:12:46.779 }, 00:12:46.779 { 00:12:46.779 "name": "BaseBdev2", 00:12:46.779 "uuid": "33334003-2f66-435b-ace9-9a0751d2e6cd", 
00:12:46.779 "is_configured": true, 00:12:46.779 "data_offset": 2048, 00:12:46.779 "data_size": 63488 00:12:46.779 }, 00:12:46.779 { 00:12:46.779 "name": "BaseBdev3", 00:12:46.779 "uuid": "375218ae-a7e8-470f-9897-cb4fb0e1f965", 00:12:46.779 "is_configured": true, 00:12:46.779 "data_offset": 2048, 00:12:46.779 "data_size": 63488 00:12:46.779 }, 00:12:46.779 { 00:12:46.779 "name": "BaseBdev4", 00:12:46.779 "uuid": "9e1175dc-5676-4524-aac5-44dd9b67f3b0", 00:12:46.779 "is_configured": true, 00:12:46.779 "data_offset": 2048, 00:12:46.779 "data_size": 63488 00:12:46.779 } 00:12:46.779 ] 00:12:46.779 }' 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.779 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.346 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:47.346 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.346 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.346 [2024-11-26 19:02:13.777512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:47.346 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.347 "name": "Existed_Raid", 00:12:47.347 "uuid": "cf790901-bac1-4a39-8568-df01024da500", 00:12:47.347 "strip_size_kb": 0, 00:12:47.347 "state": "configuring", 00:12:47.347 "raid_level": "raid1", 00:12:47.347 "superblock": true, 00:12:47.347 "num_base_bdevs": 4, 00:12:47.347 "num_base_bdevs_discovered": 2, 00:12:47.347 "num_base_bdevs_operational": 4, 00:12:47.347 "base_bdevs_list": [ 00:12:47.347 { 00:12:47.347 "name": "BaseBdev1", 00:12:47.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.347 "is_configured": false, 00:12:47.347 "data_offset": 0, 00:12:47.347 "data_size": 0 00:12:47.347 }, 00:12:47.347 { 00:12:47.347 "name": null, 00:12:47.347 "uuid": "33334003-2f66-435b-ace9-9a0751d2e6cd", 00:12:47.347 
"is_configured": false, 00:12:47.347 "data_offset": 0, 00:12:47.347 "data_size": 63488 00:12:47.347 }, 00:12:47.347 { 00:12:47.347 "name": "BaseBdev3", 00:12:47.347 "uuid": "375218ae-a7e8-470f-9897-cb4fb0e1f965", 00:12:47.347 "is_configured": true, 00:12:47.347 "data_offset": 2048, 00:12:47.347 "data_size": 63488 00:12:47.347 }, 00:12:47.347 { 00:12:47.347 "name": "BaseBdev4", 00:12:47.347 "uuid": "9e1175dc-5676-4524-aac5-44dd9b67f3b0", 00:12:47.347 "is_configured": true, 00:12:47.347 "data_offset": 2048, 00:12:47.347 "data_size": 63488 00:12:47.347 } 00:12:47.347 ] 00:12:47.347 }' 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.347 19:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.918 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:47.918 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.918 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.918 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.918 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.918 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.919 [2024-11-26 19:02:14.417525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.919 BaseBdev1 
00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.919 [ 00:12:47.919 { 00:12:47.919 "name": "BaseBdev1", 00:12:47.919 "aliases": [ 00:12:47.919 "9864f472-5dc1-4f46-8029-5a6ed1403aca" 00:12:47.919 ], 00:12:47.919 "product_name": "Malloc disk", 00:12:47.919 "block_size": 512, 00:12:47.919 "num_blocks": 65536, 00:12:47.919 "uuid": "9864f472-5dc1-4f46-8029-5a6ed1403aca", 00:12:47.919 "assigned_rate_limits": { 00:12:47.919 
"rw_ios_per_sec": 0, 00:12:47.919 "rw_mbytes_per_sec": 0, 00:12:47.919 "r_mbytes_per_sec": 0, 00:12:47.919 "w_mbytes_per_sec": 0 00:12:47.919 }, 00:12:47.919 "claimed": true, 00:12:47.919 "claim_type": "exclusive_write", 00:12:47.919 "zoned": false, 00:12:47.919 "supported_io_types": { 00:12:47.919 "read": true, 00:12:47.919 "write": true, 00:12:47.919 "unmap": true, 00:12:47.919 "flush": true, 00:12:47.919 "reset": true, 00:12:47.919 "nvme_admin": false, 00:12:47.919 "nvme_io": false, 00:12:47.919 "nvme_io_md": false, 00:12:47.919 "write_zeroes": true, 00:12:47.919 "zcopy": true, 00:12:47.919 "get_zone_info": false, 00:12:47.919 "zone_management": false, 00:12:47.919 "zone_append": false, 00:12:47.919 "compare": false, 00:12:47.919 "compare_and_write": false, 00:12:47.919 "abort": true, 00:12:47.919 "seek_hole": false, 00:12:47.919 "seek_data": false, 00:12:47.919 "copy": true, 00:12:47.919 "nvme_iov_md": false 00:12:47.919 }, 00:12:47.919 "memory_domains": [ 00:12:47.919 { 00:12:47.919 "dma_device_id": "system", 00:12:47.919 "dma_device_type": 1 00:12:47.919 }, 00:12:47.919 { 00:12:47.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.919 "dma_device_type": 2 00:12:47.919 } 00:12:47.919 ], 00:12:47.919 "driver_specific": {} 00:12:47.919 } 00:12:47.919 ] 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.919 "name": "Existed_Raid", 00:12:47.919 "uuid": "cf790901-bac1-4a39-8568-df01024da500", 00:12:47.919 "strip_size_kb": 0, 00:12:47.919 "state": "configuring", 00:12:47.919 "raid_level": "raid1", 00:12:47.919 "superblock": true, 00:12:47.919 "num_base_bdevs": 4, 00:12:47.919 "num_base_bdevs_discovered": 3, 00:12:47.919 "num_base_bdevs_operational": 4, 00:12:47.919 "base_bdevs_list": [ 00:12:47.919 { 00:12:47.919 "name": "BaseBdev1", 00:12:47.919 "uuid": "9864f472-5dc1-4f46-8029-5a6ed1403aca", 00:12:47.919 "is_configured": true, 00:12:47.919 "data_offset": 2048, 00:12:47.919 "data_size": 63488 
00:12:47.919 }, 00:12:47.919 { 00:12:47.919 "name": null, 00:12:47.919 "uuid": "33334003-2f66-435b-ace9-9a0751d2e6cd", 00:12:47.919 "is_configured": false, 00:12:47.919 "data_offset": 0, 00:12:47.919 "data_size": 63488 00:12:47.919 }, 00:12:47.919 { 00:12:47.919 "name": "BaseBdev3", 00:12:47.919 "uuid": "375218ae-a7e8-470f-9897-cb4fb0e1f965", 00:12:47.919 "is_configured": true, 00:12:47.919 "data_offset": 2048, 00:12:47.919 "data_size": 63488 00:12:47.919 }, 00:12:47.919 { 00:12:47.919 "name": "BaseBdev4", 00:12:47.919 "uuid": "9e1175dc-5676-4524-aac5-44dd9b67f3b0", 00:12:47.919 "is_configured": true, 00:12:47.919 "data_offset": 2048, 00:12:47.919 "data_size": 63488 00:12:47.919 } 00:12:47.919 ] 00:12:47.919 }' 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.919 19:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.491 
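Editor's note: the chunk above shows `verify_raid_bdev_state` selecting the `Existed_Raid` entry out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and checking its state, level, and base-bdev counts. A minimal Python sketch of that same selection and counting, using a trimmed copy of the JSON visible in the log (the field values come from the log; the parsing code itself is illustrative, not the test's actual shell logic):

```python
import json

# Trimmed payload modeled on the bdev_raid_get_bdevs output in the log
# above (only the fields the state check actually reads are kept).
rpc_output = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 4,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true},
      {"name": null,        "is_configured": false},
      {"name": "BaseBdev3", "is_configured": true},
      {"name": "BaseBdev4", "is_configured": true}
    ]
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in rpc_output if b["name"] == "Existed_Raid")

# The checks verify_raid_bdev_state performs on the selected entry.
discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
assert info["state"] == "configuring"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs_operational"] == 4
assert discovered == 3  # one base-bdev slot is still unconfigured (null name)
print(discovered)
```

This mirrors why the log reports `num_base_bdevs_discovered: 3` against `num_base_bdevs_operational: 4` while the array stays in the `configuring` state.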
[2024-11-26 19:02:15.089825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.491 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.492 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.492 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.492 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.492 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.492 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.492 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.492 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.492 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.492 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.492 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.750 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.750 19:02:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.750 "name": "Existed_Raid", 00:12:48.750 "uuid": "cf790901-bac1-4a39-8568-df01024da500", 00:12:48.750 "strip_size_kb": 0, 00:12:48.750 "state": "configuring", 00:12:48.750 "raid_level": "raid1", 00:12:48.750 "superblock": true, 00:12:48.750 "num_base_bdevs": 4, 00:12:48.750 "num_base_bdevs_discovered": 2, 00:12:48.750 "num_base_bdevs_operational": 4, 00:12:48.750 "base_bdevs_list": [ 00:12:48.750 { 00:12:48.750 "name": "BaseBdev1", 00:12:48.750 "uuid": "9864f472-5dc1-4f46-8029-5a6ed1403aca", 00:12:48.750 "is_configured": true, 00:12:48.750 "data_offset": 2048, 00:12:48.750 "data_size": 63488 00:12:48.750 }, 00:12:48.750 { 00:12:48.750 "name": null, 00:12:48.750 "uuid": "33334003-2f66-435b-ace9-9a0751d2e6cd", 00:12:48.750 "is_configured": false, 00:12:48.750 "data_offset": 0, 00:12:48.750 "data_size": 63488 00:12:48.750 }, 00:12:48.750 { 00:12:48.750 "name": null, 00:12:48.750 "uuid": "375218ae-a7e8-470f-9897-cb4fb0e1f965", 00:12:48.750 "is_configured": false, 00:12:48.750 "data_offset": 0, 00:12:48.750 "data_size": 63488 00:12:48.750 }, 00:12:48.750 { 00:12:48.750 "name": "BaseBdev4", 00:12:48.750 "uuid": "9e1175dc-5676-4524-aac5-44dd9b67f3b0", 00:12:48.750 "is_configured": true, 00:12:48.750 "data_offset": 2048, 00:12:48.750 "data_size": 63488 00:12:48.750 } 00:12:48.750 ] 00:12:48.750 }' 00:12:48.750 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.750 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.007 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.007 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.007 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.007 19:02:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.007 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.264 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:49.264 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:49.264 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.264 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.264 [2024-11-26 19:02:15.642000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.264 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.264 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:49.264 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.265 "name": "Existed_Raid", 00:12:49.265 "uuid": "cf790901-bac1-4a39-8568-df01024da500", 00:12:49.265 "strip_size_kb": 0, 00:12:49.265 "state": "configuring", 00:12:49.265 "raid_level": "raid1", 00:12:49.265 "superblock": true, 00:12:49.265 "num_base_bdevs": 4, 00:12:49.265 "num_base_bdevs_discovered": 3, 00:12:49.265 "num_base_bdevs_operational": 4, 00:12:49.265 "base_bdevs_list": [ 00:12:49.265 { 00:12:49.265 "name": "BaseBdev1", 00:12:49.265 "uuid": "9864f472-5dc1-4f46-8029-5a6ed1403aca", 00:12:49.265 "is_configured": true, 00:12:49.265 "data_offset": 2048, 00:12:49.265 "data_size": 63488 00:12:49.265 }, 00:12:49.265 { 00:12:49.265 "name": null, 00:12:49.265 "uuid": "33334003-2f66-435b-ace9-9a0751d2e6cd", 00:12:49.265 "is_configured": false, 00:12:49.265 "data_offset": 0, 00:12:49.265 "data_size": 63488 00:12:49.265 }, 00:12:49.265 { 00:12:49.265 "name": "BaseBdev3", 00:12:49.265 "uuid": "375218ae-a7e8-470f-9897-cb4fb0e1f965", 00:12:49.265 "is_configured": true, 00:12:49.265 "data_offset": 2048, 00:12:49.265 "data_size": 63488 00:12:49.265 }, 00:12:49.265 { 00:12:49.265 "name": "BaseBdev4", 00:12:49.265 "uuid": 
"9e1175dc-5676-4524-aac5-44dd9b67f3b0", 00:12:49.265 "is_configured": true, 00:12:49.265 "data_offset": 2048, 00:12:49.265 "data_size": 63488 00:12:49.265 } 00:12:49.265 ] 00:12:49.265 }' 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.265 19:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.831 [2024-11-26 19:02:16.222169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.831 "name": "Existed_Raid", 00:12:49.831 "uuid": "cf790901-bac1-4a39-8568-df01024da500", 00:12:49.831 "strip_size_kb": 0, 00:12:49.831 "state": "configuring", 00:12:49.831 "raid_level": "raid1", 00:12:49.831 "superblock": true, 00:12:49.831 "num_base_bdevs": 4, 00:12:49.831 "num_base_bdevs_discovered": 2, 00:12:49.831 "num_base_bdevs_operational": 4, 00:12:49.831 "base_bdevs_list": [ 00:12:49.831 { 00:12:49.831 "name": null, 00:12:49.831 
"uuid": "9864f472-5dc1-4f46-8029-5a6ed1403aca", 00:12:49.831 "is_configured": false, 00:12:49.831 "data_offset": 0, 00:12:49.831 "data_size": 63488 00:12:49.831 }, 00:12:49.831 { 00:12:49.831 "name": null, 00:12:49.831 "uuid": "33334003-2f66-435b-ace9-9a0751d2e6cd", 00:12:49.831 "is_configured": false, 00:12:49.831 "data_offset": 0, 00:12:49.831 "data_size": 63488 00:12:49.831 }, 00:12:49.831 { 00:12:49.831 "name": "BaseBdev3", 00:12:49.831 "uuid": "375218ae-a7e8-470f-9897-cb4fb0e1f965", 00:12:49.831 "is_configured": true, 00:12:49.831 "data_offset": 2048, 00:12:49.831 "data_size": 63488 00:12:49.831 }, 00:12:49.831 { 00:12:49.831 "name": "BaseBdev4", 00:12:49.831 "uuid": "9e1175dc-5676-4524-aac5-44dd9b67f3b0", 00:12:49.831 "is_configured": true, 00:12:49.831 "data_offset": 2048, 00:12:49.831 "data_size": 63488 00:12:49.831 } 00:12:49.831 ] 00:12:49.831 }' 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.831 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- 
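Editor's note: the steps around this point remove `BaseBdev3` (discovered drops 3 → 2), re-add it via `bdev_raid_add_base_bdev` (back to 3), then delete the malloc bdev behind `BaseBdev1` (down to 2 again), with the array remaining in `configuring` throughout. A toy model of that bookkeeping, assuming nothing about SPDK internals beyond what the log itself shows (the `RaidState` class is purely illustrative):

```python
# Toy model of the discovered-count transitions visible in the log.
# Slot names and the configuring-vs-online rule are taken from the log;
# the class itself is an assumption made for illustration.
class RaidState:
    def __init__(self, slots):
        self.slots = dict(slots)  # slot name -> is_configured?

    @property
    def discovered(self):
        return sum(self.slots.values())

    @property
    def state(self):
        # The array transitions to "online" only once every slot is configured.
        return "online" if all(self.slots.values()) else "configuring"

raid = RaidState({"BaseBdev1": True, "BaseBdev2": False,
                  "BaseBdev3": True, "BaseBdev4": True})
assert (raid.discovered, raid.state) == (3, "configuring")

raid.slots["BaseBdev3"] = False   # bdev_raid_remove_base_bdev BaseBdev3
assert raid.discovered == 2

raid.slots["BaseBdev3"] = True    # bdev_raid_add_base_bdev Existed_Raid BaseBdev3
assert raid.discovered == 3

raid.slots["BaseBdev1"] = False   # bdev_malloc_delete BaseBdev1
assert (raid.discovered, raid.state) == (2, "configuring")
print(raid.state)
```

The later `bdev_malloc_create ... -b NewBaseBdev -u 9864f472-...` step in the log reuses the original UUID, which is why the array can finally claim the new bdev into the empty slot and go `online`.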
common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.395 [2024-11-26 19:02:16.894539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.395 19:02:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.395 "name": "Existed_Raid", 00:12:50.395 "uuid": "cf790901-bac1-4a39-8568-df01024da500", 00:12:50.395 "strip_size_kb": 0, 00:12:50.395 "state": "configuring", 00:12:50.395 "raid_level": "raid1", 00:12:50.395 "superblock": true, 00:12:50.395 "num_base_bdevs": 4, 00:12:50.395 "num_base_bdevs_discovered": 3, 00:12:50.395 "num_base_bdevs_operational": 4, 00:12:50.395 "base_bdevs_list": [ 00:12:50.395 { 00:12:50.395 "name": null, 00:12:50.395 "uuid": "9864f472-5dc1-4f46-8029-5a6ed1403aca", 00:12:50.395 "is_configured": false, 00:12:50.395 "data_offset": 0, 00:12:50.395 "data_size": 63488 00:12:50.395 }, 00:12:50.395 { 00:12:50.395 "name": "BaseBdev2", 00:12:50.395 "uuid": "33334003-2f66-435b-ace9-9a0751d2e6cd", 00:12:50.395 "is_configured": true, 00:12:50.395 "data_offset": 2048, 00:12:50.395 "data_size": 63488 00:12:50.395 }, 00:12:50.395 { 00:12:50.395 "name": "BaseBdev3", 00:12:50.395 "uuid": "375218ae-a7e8-470f-9897-cb4fb0e1f965", 00:12:50.395 "is_configured": true, 00:12:50.395 "data_offset": 2048, 00:12:50.395 "data_size": 63488 00:12:50.395 }, 00:12:50.395 { 00:12:50.395 "name": "BaseBdev4", 00:12:50.395 "uuid": "9e1175dc-5676-4524-aac5-44dd9b67f3b0", 00:12:50.395 "is_configured": true, 00:12:50.395 "data_offset": 2048, 00:12:50.395 "data_size": 63488 00:12:50.395 } 00:12:50.395 ] 00:12:50.395 }' 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.395 19:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.957 19:02:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9864f472-5dc1-4f46-8029-5a6ed1403aca 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.957 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.957 [2024-11-26 19:02:17.578349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:50.957 [2024-11-26 19:02:17.578670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:50.957 [2024-11-26 19:02:17.578692] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:51.214 [2024-11-26 19:02:17.579076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:51.214 [2024-11-26 19:02:17.579299] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:51.214 [2024-11-26 19:02:17.579315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:51.214 NewBaseBdev 00:12:51.214 [2024-11-26 19:02:17.579517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:51.214 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.214 19:02:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.214 [ 00:12:51.214 { 00:12:51.214 "name": "NewBaseBdev", 00:12:51.214 "aliases": [ 00:12:51.214 "9864f472-5dc1-4f46-8029-5a6ed1403aca" 00:12:51.214 ], 00:12:51.214 "product_name": "Malloc disk", 00:12:51.214 "block_size": 512, 00:12:51.214 "num_blocks": 65536, 00:12:51.214 "uuid": "9864f472-5dc1-4f46-8029-5a6ed1403aca", 00:12:51.214 "assigned_rate_limits": { 00:12:51.214 "rw_ios_per_sec": 0, 00:12:51.214 "rw_mbytes_per_sec": 0, 00:12:51.214 "r_mbytes_per_sec": 0, 00:12:51.215 "w_mbytes_per_sec": 0 00:12:51.215 }, 00:12:51.215 "claimed": true, 00:12:51.215 "claim_type": "exclusive_write", 00:12:51.215 "zoned": false, 00:12:51.215 "supported_io_types": { 00:12:51.215 "read": true, 00:12:51.215 "write": true, 00:12:51.215 "unmap": true, 00:12:51.215 "flush": true, 00:12:51.215 "reset": true, 00:12:51.215 "nvme_admin": false, 00:12:51.215 "nvme_io": false, 00:12:51.215 "nvme_io_md": false, 00:12:51.215 "write_zeroes": true, 00:12:51.215 "zcopy": true, 00:12:51.215 "get_zone_info": false, 00:12:51.215 "zone_management": false, 00:12:51.215 "zone_append": false, 00:12:51.215 "compare": false, 00:12:51.215 "compare_and_write": false, 00:12:51.215 "abort": true, 00:12:51.215 "seek_hole": false, 00:12:51.215 "seek_data": false, 00:12:51.215 "copy": true, 00:12:51.215 "nvme_iov_md": false 00:12:51.215 }, 00:12:51.215 "memory_domains": [ 00:12:51.215 { 00:12:51.215 "dma_device_id": "system", 00:12:51.215 "dma_device_type": 1 00:12:51.215 }, 00:12:51.215 { 00:12:51.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.215 "dma_device_type": 2 00:12:51.215 } 00:12:51.215 ], 00:12:51.215 "driver_specific": {} 00:12:51.215 } 00:12:51.215 ] 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:51.215 19:02:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.215 "name": "Existed_Raid", 00:12:51.215 "uuid": "cf790901-bac1-4a39-8568-df01024da500", 00:12:51.215 "strip_size_kb": 0, 00:12:51.215 
"state": "online", 00:12:51.215 "raid_level": "raid1", 00:12:51.215 "superblock": true, 00:12:51.215 "num_base_bdevs": 4, 00:12:51.215 "num_base_bdevs_discovered": 4, 00:12:51.215 "num_base_bdevs_operational": 4, 00:12:51.215 "base_bdevs_list": [ 00:12:51.215 { 00:12:51.215 "name": "NewBaseBdev", 00:12:51.215 "uuid": "9864f472-5dc1-4f46-8029-5a6ed1403aca", 00:12:51.215 "is_configured": true, 00:12:51.215 "data_offset": 2048, 00:12:51.215 "data_size": 63488 00:12:51.215 }, 00:12:51.215 { 00:12:51.215 "name": "BaseBdev2", 00:12:51.215 "uuid": "33334003-2f66-435b-ace9-9a0751d2e6cd", 00:12:51.215 "is_configured": true, 00:12:51.215 "data_offset": 2048, 00:12:51.215 "data_size": 63488 00:12:51.215 }, 00:12:51.215 { 00:12:51.215 "name": "BaseBdev3", 00:12:51.215 "uuid": "375218ae-a7e8-470f-9897-cb4fb0e1f965", 00:12:51.215 "is_configured": true, 00:12:51.215 "data_offset": 2048, 00:12:51.215 "data_size": 63488 00:12:51.215 }, 00:12:51.215 { 00:12:51.215 "name": "BaseBdev4", 00:12:51.215 "uuid": "9e1175dc-5676-4524-aac5-44dd9b67f3b0", 00:12:51.215 "is_configured": true, 00:12:51.215 "data_offset": 2048, 00:12:51.215 "data_size": 63488 00:12:51.215 } 00:12:51.215 ] 00:12:51.215 }' 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.215 19:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:51.778 
19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.778 [2024-11-26 19:02:18.175070] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:51.778 "name": "Existed_Raid", 00:12:51.778 "aliases": [ 00:12:51.778 "cf790901-bac1-4a39-8568-df01024da500" 00:12:51.778 ], 00:12:51.778 "product_name": "Raid Volume", 00:12:51.778 "block_size": 512, 00:12:51.778 "num_blocks": 63488, 00:12:51.778 "uuid": "cf790901-bac1-4a39-8568-df01024da500", 00:12:51.778 "assigned_rate_limits": { 00:12:51.778 "rw_ios_per_sec": 0, 00:12:51.778 "rw_mbytes_per_sec": 0, 00:12:51.778 "r_mbytes_per_sec": 0, 00:12:51.778 "w_mbytes_per_sec": 0 00:12:51.778 }, 00:12:51.778 "claimed": false, 00:12:51.778 "zoned": false, 00:12:51.778 "supported_io_types": { 00:12:51.778 "read": true, 00:12:51.778 "write": true, 00:12:51.778 "unmap": false, 00:12:51.778 "flush": false, 00:12:51.778 "reset": true, 00:12:51.778 "nvme_admin": false, 00:12:51.778 "nvme_io": false, 00:12:51.778 "nvme_io_md": false, 00:12:51.778 "write_zeroes": true, 00:12:51.778 "zcopy": false, 00:12:51.778 "get_zone_info": false, 00:12:51.778 "zone_management": false, 00:12:51.778 "zone_append": false, 00:12:51.778 "compare": false, 00:12:51.778 "compare_and_write": false, 00:12:51.778 
"abort": false, 00:12:51.778 "seek_hole": false, 00:12:51.778 "seek_data": false, 00:12:51.778 "copy": false, 00:12:51.778 "nvme_iov_md": false 00:12:51.778 }, 00:12:51.778 "memory_domains": [ 00:12:51.778 { 00:12:51.778 "dma_device_id": "system", 00:12:51.778 "dma_device_type": 1 00:12:51.778 }, 00:12:51.778 { 00:12:51.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.778 "dma_device_type": 2 00:12:51.778 }, 00:12:51.778 { 00:12:51.778 "dma_device_id": "system", 00:12:51.778 "dma_device_type": 1 00:12:51.778 }, 00:12:51.778 { 00:12:51.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.778 "dma_device_type": 2 00:12:51.778 }, 00:12:51.778 { 00:12:51.778 "dma_device_id": "system", 00:12:51.778 "dma_device_type": 1 00:12:51.778 }, 00:12:51.778 { 00:12:51.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.778 "dma_device_type": 2 00:12:51.778 }, 00:12:51.778 { 00:12:51.778 "dma_device_id": "system", 00:12:51.778 "dma_device_type": 1 00:12:51.778 }, 00:12:51.778 { 00:12:51.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.778 "dma_device_type": 2 00:12:51.778 } 00:12:51.778 ], 00:12:51.778 "driver_specific": { 00:12:51.778 "raid": { 00:12:51.778 "uuid": "cf790901-bac1-4a39-8568-df01024da500", 00:12:51.778 "strip_size_kb": 0, 00:12:51.778 "state": "online", 00:12:51.778 "raid_level": "raid1", 00:12:51.778 "superblock": true, 00:12:51.778 "num_base_bdevs": 4, 00:12:51.778 "num_base_bdevs_discovered": 4, 00:12:51.778 "num_base_bdevs_operational": 4, 00:12:51.778 "base_bdevs_list": [ 00:12:51.778 { 00:12:51.778 "name": "NewBaseBdev", 00:12:51.778 "uuid": "9864f472-5dc1-4f46-8029-5a6ed1403aca", 00:12:51.778 "is_configured": true, 00:12:51.778 "data_offset": 2048, 00:12:51.778 "data_size": 63488 00:12:51.778 }, 00:12:51.778 { 00:12:51.778 "name": "BaseBdev2", 00:12:51.778 "uuid": "33334003-2f66-435b-ace9-9a0751d2e6cd", 00:12:51.778 "is_configured": true, 00:12:51.778 "data_offset": 2048, 00:12:51.778 "data_size": 63488 00:12:51.778 }, 00:12:51.778 { 
00:12:51.778 "name": "BaseBdev3", 00:12:51.778 "uuid": "375218ae-a7e8-470f-9897-cb4fb0e1f965", 00:12:51.778 "is_configured": true, 00:12:51.778 "data_offset": 2048, 00:12:51.778 "data_size": 63488 00:12:51.778 }, 00:12:51.778 { 00:12:51.778 "name": "BaseBdev4", 00:12:51.778 "uuid": "9e1175dc-5676-4524-aac5-44dd9b67f3b0", 00:12:51.778 "is_configured": true, 00:12:51.778 "data_offset": 2048, 00:12:51.778 "data_size": 63488 00:12:51.778 } 00:12:51.778 ] 00:12:51.778 } 00:12:51.778 } 00:12:51.778 }' 00:12:51.778 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:51.779 BaseBdev2 00:12:51.779 BaseBdev3 00:12:51.779 BaseBdev4' 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.779 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.035 [2024-11-26 19:02:18.558753] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.035 [2024-11-26 19:02:18.558803] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.035 [2024-11-26 19:02:18.558920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.035 [2024-11-26 19:02:18.559311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.035 [2024-11-26 19:02:18.559349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74400 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74400 ']' 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74400 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74400 00:12:52.035 killing process with pid 74400 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74400' 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74400 00:12:52.035 [2024-11-26 19:02:18.596785] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:52.035 19:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74400 00:12:52.600 [2024-11-26 19:02:18.951459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.535 19:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:53.535 00:12:53.535 real 0m13.188s 00:12:53.535 user 0m21.784s 00:12:53.535 sys 0m1.870s 00:12:53.535 ************************************ 00:12:53.535 END TEST raid_state_function_test_sb 
00:12:53.535 ************************************ 00:12:53.535 19:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.535 19:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.535 19:02:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:53.535 19:02:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:53.535 19:02:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.535 19:02:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.535 ************************************ 00:12:53.535 START TEST raid_superblock_test 00:12:53.535 ************************************ 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:53.535 19:02:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75083 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75083 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75083 ']' 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.535 19:02:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.792 [2024-11-26 19:02:20.252212] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:12:53.792 [2024-11-26 19:02:20.252737] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75083 ] 00:12:54.049 [2024-11-26 19:02:20.447216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.049 [2024-11-26 19:02:20.579807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.306 [2024-11-26 19:02:20.804184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.306 [2024-11-26 19:02:20.804453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.564 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.564 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:54.564 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:54.564 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.564 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:54.565 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:54.565 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:54.565 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.565 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.565 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.565 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:54.565 
19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.565 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 malloc1 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 [2024-11-26 19:02:21.223401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:54.824 [2024-11-26 19:02:21.223609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.824 [2024-11-26 19:02:21.223654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:54.824 [2024-11-26 19:02:21.223671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.824 [2024-11-26 19:02:21.226718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.824 [2024-11-26 19:02:21.226760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:54.824 pt1 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 malloc2 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 [2024-11-26 19:02:21.282420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:54.824 [2024-11-26 19:02:21.282500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.824 [2024-11-26 19:02:21.282538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:54.824 [2024-11-26 19:02:21.282551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.824 [2024-11-26 19:02:21.285568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.824 [2024-11-26 19:02:21.285612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:54.824 
pt2 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 malloc3 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.824 [2024-11-26 19:02:21.352869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:54.824 [2024-11-26 19:02:21.352948] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.824 [2024-11-26 19:02:21.353023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:54.824 [2024-11-26 19:02:21.353039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.824 [2024-11-26 19:02:21.356036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.824 [2024-11-26 19:02:21.356094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:54.824 pt3 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:54.824 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.825 malloc4 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.825 [2024-11-26 19:02:21.411085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:54.825 [2024-11-26 19:02:21.411340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.825 [2024-11-26 19:02:21.411418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:54.825 [2024-11-26 19:02:21.411628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.825 [2024-11-26 19:02:21.414677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.825 [2024-11-26 19:02:21.414840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:54.825 pt4 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.825 [2024-11-26 19:02:21.423133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:54.825 [2024-11-26 19:02:21.425799] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:54.825 [2024-11-26 19:02:21.425889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:54.825 [2024-11-26 19:02:21.425978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:54.825 [2024-11-26 19:02:21.426227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:54.825 [2024-11-26 19:02:21.426248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:54.825 [2024-11-26 19:02:21.426637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:54.825 [2024-11-26 19:02:21.426863] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:54.825 [2024-11-26 19:02:21.426963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:54.825 [2024-11-26 19:02:21.427203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.825 
19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.825 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.084 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.084 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.084 "name": "raid_bdev1", 00:12:55.084 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:55.084 "strip_size_kb": 0, 00:12:55.084 "state": "online", 00:12:55.084 "raid_level": "raid1", 00:12:55.084 "superblock": true, 00:12:55.084 "num_base_bdevs": 4, 00:12:55.084 "num_base_bdevs_discovered": 4, 00:12:55.084 "num_base_bdevs_operational": 4, 00:12:55.084 "base_bdevs_list": [ 00:12:55.084 { 00:12:55.084 "name": "pt1", 00:12:55.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.084 "is_configured": true, 00:12:55.084 "data_offset": 2048, 00:12:55.084 "data_size": 63488 00:12:55.084 }, 00:12:55.084 { 00:12:55.084 "name": "pt2", 00:12:55.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.084 "is_configured": true, 00:12:55.084 "data_offset": 2048, 00:12:55.084 "data_size": 63488 00:12:55.084 }, 00:12:55.084 { 00:12:55.084 "name": "pt3", 00:12:55.084 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.084 "is_configured": true, 00:12:55.084 "data_offset": 2048, 00:12:55.084 "data_size": 63488 
00:12:55.084 }, 00:12:55.084 { 00:12:55.084 "name": "pt4", 00:12:55.084 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:55.084 "is_configured": true, 00:12:55.084 "data_offset": 2048, 00:12:55.084 "data_size": 63488 00:12:55.084 } 00:12:55.084 ] 00:12:55.084 }' 00:12:55.084 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.084 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.342 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:55.342 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:55.342 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.342 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.342 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.342 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.342 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.342 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.342 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.342 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.342 [2024-11-26 19:02:21.951812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.602 19:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.602 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.602 "name": "raid_bdev1", 00:12:55.602 "aliases": [ 00:12:55.602 "9c26ff07-7c21-43f4-8c54-86ae7f0d2789" 00:12:55.602 ], 
00:12:55.602 "product_name": "Raid Volume", 00:12:55.602 "block_size": 512, 00:12:55.602 "num_blocks": 63488, 00:12:55.602 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:55.602 "assigned_rate_limits": { 00:12:55.602 "rw_ios_per_sec": 0, 00:12:55.602 "rw_mbytes_per_sec": 0, 00:12:55.602 "r_mbytes_per_sec": 0, 00:12:55.602 "w_mbytes_per_sec": 0 00:12:55.602 }, 00:12:55.602 "claimed": false, 00:12:55.602 "zoned": false, 00:12:55.602 "supported_io_types": { 00:12:55.602 "read": true, 00:12:55.602 "write": true, 00:12:55.602 "unmap": false, 00:12:55.602 "flush": false, 00:12:55.602 "reset": true, 00:12:55.602 "nvme_admin": false, 00:12:55.602 "nvme_io": false, 00:12:55.602 "nvme_io_md": false, 00:12:55.602 "write_zeroes": true, 00:12:55.602 "zcopy": false, 00:12:55.602 "get_zone_info": false, 00:12:55.602 "zone_management": false, 00:12:55.602 "zone_append": false, 00:12:55.602 "compare": false, 00:12:55.602 "compare_and_write": false, 00:12:55.602 "abort": false, 00:12:55.602 "seek_hole": false, 00:12:55.602 "seek_data": false, 00:12:55.602 "copy": false, 00:12:55.602 "nvme_iov_md": false 00:12:55.602 }, 00:12:55.602 "memory_domains": [ 00:12:55.602 { 00:12:55.602 "dma_device_id": "system", 00:12:55.602 "dma_device_type": 1 00:12:55.602 }, 00:12:55.602 { 00:12:55.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.602 "dma_device_type": 2 00:12:55.602 }, 00:12:55.602 { 00:12:55.602 "dma_device_id": "system", 00:12:55.602 "dma_device_type": 1 00:12:55.602 }, 00:12:55.602 { 00:12:55.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.602 "dma_device_type": 2 00:12:55.602 }, 00:12:55.602 { 00:12:55.602 "dma_device_id": "system", 00:12:55.602 "dma_device_type": 1 00:12:55.602 }, 00:12:55.602 { 00:12:55.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.602 "dma_device_type": 2 00:12:55.602 }, 00:12:55.602 { 00:12:55.602 "dma_device_id": "system", 00:12:55.602 "dma_device_type": 1 00:12:55.602 }, 00:12:55.602 { 00:12:55.602 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:55.602 "dma_device_type": 2 00:12:55.602 } 00:12:55.602 ], 00:12:55.602 "driver_specific": { 00:12:55.602 "raid": { 00:12:55.602 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:55.602 "strip_size_kb": 0, 00:12:55.602 "state": "online", 00:12:55.602 "raid_level": "raid1", 00:12:55.602 "superblock": true, 00:12:55.602 "num_base_bdevs": 4, 00:12:55.602 "num_base_bdevs_discovered": 4, 00:12:55.602 "num_base_bdevs_operational": 4, 00:12:55.602 "base_bdevs_list": [ 00:12:55.602 { 00:12:55.602 "name": "pt1", 00:12:55.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.602 "is_configured": true, 00:12:55.602 "data_offset": 2048, 00:12:55.602 "data_size": 63488 00:12:55.602 }, 00:12:55.602 { 00:12:55.602 "name": "pt2", 00:12:55.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.602 "is_configured": true, 00:12:55.602 "data_offset": 2048, 00:12:55.602 "data_size": 63488 00:12:55.602 }, 00:12:55.602 { 00:12:55.602 "name": "pt3", 00:12:55.602 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.602 "is_configured": true, 00:12:55.602 "data_offset": 2048, 00:12:55.602 "data_size": 63488 00:12:55.602 }, 00:12:55.602 { 00:12:55.602 "name": "pt4", 00:12:55.602 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:55.602 "is_configured": true, 00:12:55.602 "data_offset": 2048, 00:12:55.602 "data_size": 63488 00:12:55.602 } 00:12:55.602 ] 00:12:55.602 } 00:12:55.602 } 00:12:55.602 }' 00:12:55.602 19:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:55.602 pt2 00:12:55.602 pt3 00:12:55.602 pt4' 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:55.602 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.603 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.603 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.603 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.603 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.603 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.603 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.603 19:02:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:55.603 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.603 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.603 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.603 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.860 [2024-11-26 19:02:22.319776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.860 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9c26ff07-7c21-43f4-8c54-86ae7f0d2789 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9c26ff07-7c21-43f4-8c54-86ae7f0d2789 ']' 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.861 [2024-11-26 19:02:22.371394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.861 [2024-11-26 19:02:22.371421] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.861 [2024-11-26 19:02:22.371519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.861 [2024-11-26 19:02:22.371637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.861 [2024-11-26 19:02:22.371705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.861 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.118 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.118 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:56.118 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:56.118 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:56.118 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:56.118 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:56.118 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.118 19:02:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:56.118 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:56.118 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:56.118 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.118 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.118 [2024-11-26 19:02:22.527430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:56.118 [2024-11-26 19:02:22.530088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:56.118 [2024-11-26 19:02:22.530347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:56.118 [2024-11-26 19:02:22.530421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:56.118 [2024-11-26 19:02:22.530500] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:56.118 [2024-11-26 19:02:22.530570] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:56.118 [2024-11-26 19:02:22.530603] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:56.119 [2024-11-26 19:02:22.530633] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:56.119 [2024-11-26 19:02:22.530653] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.119 [2024-11-26 19:02:22.530669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:56.119 request: 00:12:56.119 { 00:12:56.119 "name": "raid_bdev1", 00:12:56.119 "raid_level": "raid1", 00:12:56.119 "base_bdevs": [ 00:12:56.119 "malloc1", 00:12:56.119 "malloc2", 00:12:56.119 "malloc3", 00:12:56.119 "malloc4" 00:12:56.119 ], 00:12:56.119 "superblock": false, 00:12:56.119 "method": "bdev_raid_create", 00:12:56.119 "req_id": 1 00:12:56.119 } 00:12:56.119 Got JSON-RPC error response 00:12:56.119 response: 00:12:56.119 { 00:12:56.119 "code": -17, 00:12:56.119 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:56.119 } 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:56.119 
19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.119 [2024-11-26 19:02:22.583495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:56.119 [2024-11-26 19:02:22.583700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.119 [2024-11-26 19:02:22.583765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:56.119 [2024-11-26 19:02:22.583881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.119 [2024-11-26 19:02:22.586957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.119 [2024-11-26 19:02:22.587170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:56.119 [2024-11-26 19:02:22.587390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:56.119 [2024-11-26 19:02:22.587570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:56.119 pt1 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.119 19:02:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.119 "name": "raid_bdev1", 00:12:56.119 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:56.119 "strip_size_kb": 0, 00:12:56.119 "state": "configuring", 00:12:56.119 "raid_level": "raid1", 00:12:56.119 "superblock": true, 00:12:56.119 "num_base_bdevs": 4, 00:12:56.119 "num_base_bdevs_discovered": 1, 00:12:56.119 "num_base_bdevs_operational": 4, 00:12:56.119 "base_bdevs_list": [ 00:12:56.119 { 00:12:56.119 "name": "pt1", 00:12:56.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.119 "is_configured": true, 00:12:56.119 "data_offset": 2048, 00:12:56.119 "data_size": 63488 00:12:56.119 }, 00:12:56.119 { 00:12:56.119 "name": null, 00:12:56.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.119 "is_configured": false, 00:12:56.119 "data_offset": 2048, 00:12:56.119 "data_size": 63488 00:12:56.119 }, 00:12:56.119 { 00:12:56.119 "name": null, 00:12:56.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.119 
"is_configured": false, 00:12:56.119 "data_offset": 2048, 00:12:56.119 "data_size": 63488 00:12:56.119 }, 00:12:56.119 { 00:12:56.119 "name": null, 00:12:56.119 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:56.119 "is_configured": false, 00:12:56.119 "data_offset": 2048, 00:12:56.119 "data_size": 63488 00:12:56.119 } 00:12:56.119 ] 00:12:56.119 }' 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.119 19:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.688 [2024-11-26 19:02:23.108137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:56.688 [2024-11-26 19:02:23.108247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.688 [2024-11-26 19:02:23.108281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:56.688 [2024-11-26 19:02:23.108343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.688 [2024-11-26 19:02:23.109034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.688 [2024-11-26 19:02:23.109088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:56.688 [2024-11-26 19:02:23.109203] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:56.688 [2024-11-26 19:02:23.109244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:56.688 pt2 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.688 [2024-11-26 19:02:23.116087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.688 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:56.689 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.689 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.689 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.689 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.689 "name": "raid_bdev1", 00:12:56.689 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:56.689 "strip_size_kb": 0, 00:12:56.689 "state": "configuring", 00:12:56.689 "raid_level": "raid1", 00:12:56.689 "superblock": true, 00:12:56.689 "num_base_bdevs": 4, 00:12:56.689 "num_base_bdevs_discovered": 1, 00:12:56.689 "num_base_bdevs_operational": 4, 00:12:56.689 "base_bdevs_list": [ 00:12:56.689 { 00:12:56.689 "name": "pt1", 00:12:56.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.689 "is_configured": true, 00:12:56.689 "data_offset": 2048, 00:12:56.689 "data_size": 63488 00:12:56.689 }, 00:12:56.689 { 00:12:56.689 "name": null, 00:12:56.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.689 "is_configured": false, 00:12:56.689 "data_offset": 0, 00:12:56.689 "data_size": 63488 00:12:56.689 }, 00:12:56.689 { 00:12:56.689 "name": null, 00:12:56.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.689 "is_configured": false, 00:12:56.689 "data_offset": 2048, 00:12:56.689 "data_size": 63488 00:12:56.689 }, 00:12:56.689 { 00:12:56.689 "name": null, 00:12:56.689 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:56.689 "is_configured": false, 00:12:56.689 "data_offset": 2048, 00:12:56.689 "data_size": 63488 00:12:56.689 } 00:12:56.689 ] 00:12:56.689 }' 00:12:56.689 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.689 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.257 [2024-11-26 19:02:23.640255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:57.257 [2024-11-26 19:02:23.640363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.257 [2024-11-26 19:02:23.640397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:57.257 [2024-11-26 19:02:23.640412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.257 [2024-11-26 19:02:23.641111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.257 [2024-11-26 19:02:23.641136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:57.257 [2024-11-26 19:02:23.641249] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:57.257 [2024-11-26 19:02:23.641299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:57.257 pt2 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:57.257 19:02:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.257 [2024-11-26 19:02:23.648204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:57.257 [2024-11-26 19:02:23.648277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.257 [2024-11-26 19:02:23.648338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:57.257 [2024-11-26 19:02:23.648355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.257 [2024-11-26 19:02:23.648821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.257 [2024-11-26 19:02:23.648863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:57.257 [2024-11-26 19:02:23.648951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:57.257 [2024-11-26 19:02:23.649006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:57.257 pt3 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.257 [2024-11-26 19:02:23.656192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:57.257 [2024-11-26 
19:02:23.656277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.257 [2024-11-26 19:02:23.656351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:57.257 [2024-11-26 19:02:23.656367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.257 [2024-11-26 19:02:23.656903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.257 [2024-11-26 19:02:23.656943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:57.257 [2024-11-26 19:02:23.657052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:57.257 [2024-11-26 19:02:23.657102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:57.257 [2024-11-26 19:02:23.657321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:57.257 [2024-11-26 19:02:23.657338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:57.257 [2024-11-26 19:02:23.657670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:57.257 [2024-11-26 19:02:23.657934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:57.257 [2024-11-26 19:02:23.657962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:57.257 [2024-11-26 19:02:23.658138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.257 pt4 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.257 "name": "raid_bdev1", 00:12:57.257 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:57.257 "strip_size_kb": 0, 00:12:57.257 "state": "online", 00:12:57.257 "raid_level": "raid1", 00:12:57.257 "superblock": true, 00:12:57.257 "num_base_bdevs": 4, 00:12:57.257 
"num_base_bdevs_discovered": 4, 00:12:57.257 "num_base_bdevs_operational": 4, 00:12:57.257 "base_bdevs_list": [ 00:12:57.257 { 00:12:57.257 "name": "pt1", 00:12:57.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.257 "is_configured": true, 00:12:57.257 "data_offset": 2048, 00:12:57.257 "data_size": 63488 00:12:57.257 }, 00:12:57.257 { 00:12:57.257 "name": "pt2", 00:12:57.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.257 "is_configured": true, 00:12:57.257 "data_offset": 2048, 00:12:57.257 "data_size": 63488 00:12:57.257 }, 00:12:57.257 { 00:12:57.257 "name": "pt3", 00:12:57.257 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.257 "is_configured": true, 00:12:57.257 "data_offset": 2048, 00:12:57.257 "data_size": 63488 00:12:57.257 }, 00:12:57.257 { 00:12:57.257 "name": "pt4", 00:12:57.257 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:57.257 "is_configured": true, 00:12:57.257 "data_offset": 2048, 00:12:57.257 "data_size": 63488 00:12:57.257 } 00:12:57.257 ] 00:12:57.257 }' 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.257 19:02:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.825 [2024-11-26 19:02:24.156953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.825 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.825 "name": "raid_bdev1", 00:12:57.825 "aliases": [ 00:12:57.825 "9c26ff07-7c21-43f4-8c54-86ae7f0d2789" 00:12:57.825 ], 00:12:57.825 "product_name": "Raid Volume", 00:12:57.825 "block_size": 512, 00:12:57.825 "num_blocks": 63488, 00:12:57.825 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:57.825 "assigned_rate_limits": { 00:12:57.825 "rw_ios_per_sec": 0, 00:12:57.825 "rw_mbytes_per_sec": 0, 00:12:57.825 "r_mbytes_per_sec": 0, 00:12:57.825 "w_mbytes_per_sec": 0 00:12:57.825 }, 00:12:57.825 "claimed": false, 00:12:57.825 "zoned": false, 00:12:57.825 "supported_io_types": { 00:12:57.825 "read": true, 00:12:57.825 "write": true, 00:12:57.825 "unmap": false, 00:12:57.825 "flush": false, 00:12:57.825 "reset": true, 00:12:57.825 "nvme_admin": false, 00:12:57.825 "nvme_io": false, 00:12:57.825 "nvme_io_md": false, 00:12:57.825 "write_zeroes": true, 00:12:57.825 "zcopy": false, 00:12:57.825 "get_zone_info": false, 00:12:57.825 "zone_management": false, 00:12:57.825 "zone_append": false, 00:12:57.825 "compare": false, 00:12:57.825 "compare_and_write": false, 00:12:57.825 "abort": false, 00:12:57.825 "seek_hole": false, 00:12:57.825 "seek_data": false, 00:12:57.825 "copy": false, 00:12:57.825 "nvme_iov_md": false 00:12:57.825 }, 00:12:57.825 "memory_domains": [ 00:12:57.825 { 00:12:57.825 "dma_device_id": "system", 00:12:57.825 
"dma_device_type": 1 00:12:57.825 }, 00:12:57.825 { 00:12:57.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.825 "dma_device_type": 2 00:12:57.825 }, 00:12:57.825 { 00:12:57.825 "dma_device_id": "system", 00:12:57.825 "dma_device_type": 1 00:12:57.825 }, 00:12:57.825 { 00:12:57.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.825 "dma_device_type": 2 00:12:57.825 }, 00:12:57.825 { 00:12:57.825 "dma_device_id": "system", 00:12:57.825 "dma_device_type": 1 00:12:57.825 }, 00:12:57.825 { 00:12:57.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.825 "dma_device_type": 2 00:12:57.825 }, 00:12:57.825 { 00:12:57.825 "dma_device_id": "system", 00:12:57.825 "dma_device_type": 1 00:12:57.825 }, 00:12:57.825 { 00:12:57.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.825 "dma_device_type": 2 00:12:57.825 } 00:12:57.825 ], 00:12:57.825 "driver_specific": { 00:12:57.825 "raid": { 00:12:57.825 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:57.825 "strip_size_kb": 0, 00:12:57.825 "state": "online", 00:12:57.825 "raid_level": "raid1", 00:12:57.825 "superblock": true, 00:12:57.825 "num_base_bdevs": 4, 00:12:57.825 "num_base_bdevs_discovered": 4, 00:12:57.825 "num_base_bdevs_operational": 4, 00:12:57.825 "base_bdevs_list": [ 00:12:57.825 { 00:12:57.825 "name": "pt1", 00:12:57.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.825 "is_configured": true, 00:12:57.825 "data_offset": 2048, 00:12:57.825 "data_size": 63488 00:12:57.825 }, 00:12:57.825 { 00:12:57.825 "name": "pt2", 00:12:57.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.825 "is_configured": true, 00:12:57.825 "data_offset": 2048, 00:12:57.825 "data_size": 63488 00:12:57.825 }, 00:12:57.825 { 00:12:57.825 "name": "pt3", 00:12:57.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.825 "is_configured": true, 00:12:57.825 "data_offset": 2048, 00:12:57.825 "data_size": 63488 00:12:57.825 }, 00:12:57.825 { 00:12:57.825 "name": "pt4", 00:12:57.825 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:57.825 "is_configured": true, 00:12:57.825 "data_offset": 2048, 00:12:57.825 "data_size": 63488 00:12:57.825 } 00:12:57.825 ] 00:12:57.825 } 00:12:57.825 } 00:12:57.825 }' 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:57.826 pt2 00:12:57.826 pt3 00:12:57.826 pt4' 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.826 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.084 [2024-11-26 19:02:24.524956] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9c26ff07-7c21-43f4-8c54-86ae7f0d2789 '!=' 9c26ff07-7c21-43f4-8c54-86ae7f0d2789 ']' 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.084 [2024-11-26 19:02:24.568624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:58.084 19:02:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.084 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.084 "name": "raid_bdev1", 00:12:58.084 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:58.084 "strip_size_kb": 0, 00:12:58.084 "state": "online", 
00:12:58.084 "raid_level": "raid1", 00:12:58.084 "superblock": true, 00:12:58.084 "num_base_bdevs": 4, 00:12:58.084 "num_base_bdevs_discovered": 3, 00:12:58.084 "num_base_bdevs_operational": 3, 00:12:58.084 "base_bdevs_list": [ 00:12:58.084 { 00:12:58.084 "name": null, 00:12:58.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.084 "is_configured": false, 00:12:58.084 "data_offset": 0, 00:12:58.084 "data_size": 63488 00:12:58.084 }, 00:12:58.084 { 00:12:58.084 "name": "pt2", 00:12:58.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.085 "is_configured": true, 00:12:58.085 "data_offset": 2048, 00:12:58.085 "data_size": 63488 00:12:58.085 }, 00:12:58.085 { 00:12:58.085 "name": "pt3", 00:12:58.085 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.085 "is_configured": true, 00:12:58.085 "data_offset": 2048, 00:12:58.085 "data_size": 63488 00:12:58.085 }, 00:12:58.085 { 00:12:58.085 "name": "pt4", 00:12:58.085 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:58.085 "is_configured": true, 00:12:58.085 "data_offset": 2048, 00:12:58.085 "data_size": 63488 00:12:58.085 } 00:12:58.085 ] 00:12:58.085 }' 00:12:58.085 19:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.085 19:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 [2024-11-26 19:02:25.096830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.653 [2024-11-26 19:02:25.096907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.653 [2024-11-26 19:02:25.097054] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:58.653 [2024-11-26 19:02:25.097177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.653 [2024-11-26 19:02:25.097195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:58.653 
19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 [2024-11-26 19:02:25.196785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.653 [2024-11-26 19:02:25.196888] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.653 [2024-11-26 19:02:25.196921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:58.653 [2024-11-26 19:02:25.196936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.653 [2024-11-26 19:02:25.200158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.653 [2024-11-26 19:02:25.200206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.653 [2024-11-26 19:02:25.200389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:58.653 [2024-11-26 19:02:25.200464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:58.653 pt2 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.653 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.654 "name": "raid_bdev1", 00:12:58.654 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:58.654 "strip_size_kb": 0, 00:12:58.654 "state": "configuring", 00:12:58.654 "raid_level": "raid1", 00:12:58.654 "superblock": true, 00:12:58.654 "num_base_bdevs": 4, 00:12:58.654 "num_base_bdevs_discovered": 1, 00:12:58.654 "num_base_bdevs_operational": 3, 00:12:58.654 "base_bdevs_list": [ 00:12:58.654 { 00:12:58.654 "name": null, 00:12:58.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.654 "is_configured": false, 00:12:58.654 "data_offset": 2048, 00:12:58.654 "data_size": 63488 00:12:58.654 }, 00:12:58.654 { 00:12:58.654 "name": "pt2", 00:12:58.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.654 "is_configured": true, 00:12:58.654 "data_offset": 2048, 00:12:58.654 "data_size": 63488 00:12:58.654 }, 00:12:58.654 { 00:12:58.654 "name": null, 00:12:58.654 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.654 "is_configured": false, 00:12:58.654 "data_offset": 2048, 00:12:58.654 "data_size": 63488 00:12:58.654 }, 00:12:58.654 { 00:12:58.654 "name": null, 00:12:58.654 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:58.654 "is_configured": false, 00:12:58.654 "data_offset": 2048, 00:12:58.654 "data_size": 63488 00:12:58.654 } 00:12:58.654 ] 00:12:58.654 }' 
00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.654 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.221 [2024-11-26 19:02:25.720966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:59.221 [2024-11-26 19:02:25.721095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.221 [2024-11-26 19:02:25.721134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:59.221 [2024-11-26 19:02:25.721151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.221 [2024-11-26 19:02:25.721830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.221 [2024-11-26 19:02:25.721863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:59.221 [2024-11-26 19:02:25.722017] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:59.221 [2024-11-26 19:02:25.722060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:59.221 pt3 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.221 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.221 "name": "raid_bdev1", 00:12:59.221 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:59.222 "strip_size_kb": 0, 00:12:59.222 "state": "configuring", 00:12:59.222 "raid_level": "raid1", 00:12:59.222 "superblock": true, 00:12:59.222 "num_base_bdevs": 4, 00:12:59.222 "num_base_bdevs_discovered": 2, 00:12:59.222 "num_base_bdevs_operational": 3, 00:12:59.222 
"base_bdevs_list": [ 00:12:59.222 { 00:12:59.222 "name": null, 00:12:59.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.222 "is_configured": false, 00:12:59.222 "data_offset": 2048, 00:12:59.222 "data_size": 63488 00:12:59.222 }, 00:12:59.222 { 00:12:59.222 "name": "pt2", 00:12:59.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.222 "is_configured": true, 00:12:59.222 "data_offset": 2048, 00:12:59.222 "data_size": 63488 00:12:59.222 }, 00:12:59.222 { 00:12:59.222 "name": "pt3", 00:12:59.222 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.222 "is_configured": true, 00:12:59.222 "data_offset": 2048, 00:12:59.222 "data_size": 63488 00:12:59.222 }, 00:12:59.222 { 00:12:59.222 "name": null, 00:12:59.222 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:59.222 "is_configured": false, 00:12:59.222 "data_offset": 2048, 00:12:59.222 "data_size": 63488 00:12:59.222 } 00:12:59.222 ] 00:12:59.222 }' 00:12:59.222 19:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.222 19:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.788 [2024-11-26 19:02:26.261164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:59.788 [2024-11-26 19:02:26.261280] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.788 [2024-11-26 19:02:26.261357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:59.788 [2024-11-26 19:02:26.261386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.788 [2024-11-26 19:02:26.262442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.788 [2024-11-26 19:02:26.262501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:59.788 [2024-11-26 19:02:26.262722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:59.788 [2024-11-26 19:02:26.262785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:59.788 [2024-11-26 19:02:26.263131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:59.788 [2024-11-26 19:02:26.263174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.788 [2024-11-26 19:02:26.263722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:59.788 [2024-11-26 19:02:26.263995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:59.788 [2024-11-26 19:02:26.264024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:59.788 [2024-11-26 19:02:26.264245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.788 pt4 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.788 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.788 "name": "raid_bdev1", 00:12:59.788 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:12:59.788 "strip_size_kb": 0, 00:12:59.788 "state": "online", 00:12:59.788 "raid_level": "raid1", 00:12:59.788 "superblock": true, 00:12:59.788 "num_base_bdevs": 4, 00:12:59.788 "num_base_bdevs_discovered": 3, 00:12:59.788 "num_base_bdevs_operational": 3, 00:12:59.788 "base_bdevs_list": [ 00:12:59.788 { 00:12:59.788 "name": null, 00:12:59.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.788 "is_configured": false, 00:12:59.789 
"data_offset": 2048, 00:12:59.789 "data_size": 63488 00:12:59.789 }, 00:12:59.789 { 00:12:59.789 "name": "pt2", 00:12:59.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.789 "is_configured": true, 00:12:59.789 "data_offset": 2048, 00:12:59.789 "data_size": 63488 00:12:59.789 }, 00:12:59.789 { 00:12:59.789 "name": "pt3", 00:12:59.789 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.789 "is_configured": true, 00:12:59.789 "data_offset": 2048, 00:12:59.789 "data_size": 63488 00:12:59.789 }, 00:12:59.789 { 00:12:59.789 "name": "pt4", 00:12:59.789 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:59.789 "is_configured": true, 00:12:59.789 "data_offset": 2048, 00:12:59.789 "data_size": 63488 00:12:59.789 } 00:12:59.789 ] 00:12:59.789 }' 00:12:59.789 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.789 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.356 [2024-11-26 19:02:26.741193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.356 [2024-11-26 19:02:26.741232] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.356 [2024-11-26 19:02:26.741380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.356 [2024-11-26 19:02:26.741492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.356 [2024-11-26 19:02:26.741514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:00.356 19:02:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.356 [2024-11-26 19:02:26.809186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:00.356 [2024-11-26 19:02:26.809271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:00.356 [2024-11-26 19:02:26.809371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:00.356 [2024-11-26 19:02:26.809393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.356 [2024-11-26 19:02:26.812563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.356 [2024-11-26 19:02:26.812617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:00.356 [2024-11-26 19:02:26.812772] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:00.356 [2024-11-26 19:02:26.812840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:00.356 [2024-11-26 19:02:26.813060] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:00.356 [2024-11-26 19:02:26.813086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.356 [2024-11-26 19:02:26.813115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:00.356 [2024-11-26 19:02:26.813199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.356 [2024-11-26 19:02:26.813383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:00.356 pt1 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.356 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.356 "name": "raid_bdev1", 00:13:00.356 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:13:00.356 "strip_size_kb": 0, 00:13:00.356 "state": "configuring", 00:13:00.356 "raid_level": "raid1", 00:13:00.356 "superblock": true, 00:13:00.356 "num_base_bdevs": 4, 00:13:00.356 "num_base_bdevs_discovered": 2, 00:13:00.356 "num_base_bdevs_operational": 3, 00:13:00.356 "base_bdevs_list": [ 00:13:00.356 { 00:13:00.356 "name": null, 00:13:00.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.356 "is_configured": false, 00:13:00.356 "data_offset": 2048, 00:13:00.356 
"data_size": 63488 00:13:00.356 }, 00:13:00.356 { 00:13:00.356 "name": "pt2", 00:13:00.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.356 "is_configured": true, 00:13:00.356 "data_offset": 2048, 00:13:00.356 "data_size": 63488 00:13:00.356 }, 00:13:00.356 { 00:13:00.356 "name": "pt3", 00:13:00.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.356 "is_configured": true, 00:13:00.356 "data_offset": 2048, 00:13:00.356 "data_size": 63488 00:13:00.356 }, 00:13:00.356 { 00:13:00.356 "name": null, 00:13:00.356 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:00.356 "is_configured": false, 00:13:00.356 "data_offset": 2048, 00:13:00.357 "data_size": 63488 00:13:00.357 } 00:13:00.357 ] 00:13:00.357 }' 00:13:00.357 19:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.357 19:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.924 [2024-11-26 
19:02:27.393784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:00.924 [2024-11-26 19:02:27.393885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.924 [2024-11-26 19:02:27.393922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:00.924 [2024-11-26 19:02:27.393937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.924 [2024-11-26 19:02:27.394603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.924 [2024-11-26 19:02:27.394631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:00.924 [2024-11-26 19:02:27.394780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:00.924 [2024-11-26 19:02:27.394871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:00.924 [2024-11-26 19:02:27.395068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:00.924 [2024-11-26 19:02:27.395085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.924 [2024-11-26 19:02:27.395472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:00.924 [2024-11-26 19:02:27.395725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:00.924 [2024-11-26 19:02:27.395747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:00.924 [2024-11-26 19:02:27.395924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.924 pt4 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:00.924 19:02:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.924 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.924 "name": "raid_bdev1", 00:13:00.924 "uuid": "9c26ff07-7c21-43f4-8c54-86ae7f0d2789", 00:13:00.924 "strip_size_kb": 0, 00:13:00.924 "state": "online", 00:13:00.924 "raid_level": "raid1", 00:13:00.924 "superblock": true, 00:13:00.924 "num_base_bdevs": 4, 00:13:00.924 "num_base_bdevs_discovered": 3, 00:13:00.924 "num_base_bdevs_operational": 3, 00:13:00.924 "base_bdevs_list": [ 00:13:00.924 { 
00:13:00.924 "name": null, 00:13:00.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.924 "is_configured": false, 00:13:00.924 "data_offset": 2048, 00:13:00.924 "data_size": 63488 00:13:00.924 }, 00:13:00.924 { 00:13:00.924 "name": "pt2", 00:13:00.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.925 "is_configured": true, 00:13:00.925 "data_offset": 2048, 00:13:00.925 "data_size": 63488 00:13:00.925 }, 00:13:00.925 { 00:13:00.925 "name": "pt3", 00:13:00.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.925 "is_configured": true, 00:13:00.925 "data_offset": 2048, 00:13:00.925 "data_size": 63488 00:13:00.925 }, 00:13:00.925 { 00:13:00.925 "name": "pt4", 00:13:00.925 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:00.925 "is_configured": true, 00:13:00.925 "data_offset": 2048, 00:13:00.925 "data_size": 63488 00:13:00.925 } 00:13:00.925 ] 00:13:00.925 }' 00:13:00.925 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.925 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.492 
19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:01.492 [2024-11-26 19:02:27.962378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9c26ff07-7c21-43f4-8c54-86ae7f0d2789 '!=' 9c26ff07-7c21-43f4-8c54-86ae7f0d2789 ']' 00:13:01.492 19:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75083 00:13:01.492 19:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75083 ']' 00:13:01.492 19:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75083 00:13:01.492 19:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:01.492 19:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.492 19:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75083 00:13:01.492 killing process with pid 75083 00:13:01.492 19:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.492 19:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.492 19:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75083' 00:13:01.492 19:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75083 00:13:01.492 [2024-11-26 19:02:28.035184] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:01.492 19:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75083 00:13:01.492 [2024-11-26 19:02:28.035421] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.492 [2024-11-26 19:02:28.035627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.492 [2024-11-26 19:02:28.035678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:02.059 [2024-11-26 19:02:28.428363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:02.992 ************************************ 00:13:02.992 END TEST raid_superblock_test 00:13:02.992 ************************************ 00:13:02.992 19:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:02.992 00:13:02.993 real 0m9.386s 00:13:02.993 user 0m15.246s 00:13:02.993 sys 0m1.433s 00:13:02.993 19:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.993 19:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.993 19:02:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:02.993 19:02:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:02.993 19:02:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.993 19:02:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.993 ************************************ 00:13:02.993 START TEST raid_read_error_test 00:13:02.993 ************************************ 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:02.993 19:02:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.msGeIcinmm 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75582 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75582 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75582 ']' 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.993 19:02:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.250 [2024-11-26 19:02:29.698384] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:13:03.250 [2024-11-26 19:02:29.698561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75582 ] 00:13:03.508 [2024-11-26 19:02:29.885969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.508 [2024-11-26 19:02:30.025763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.767 [2024-11-26 19:02:30.240765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.767 [2024-11-26 19:02:30.240860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.334 BaseBdev1_malloc 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.334 true 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.334 [2024-11-26 19:02:30.741782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:04.334 [2024-11-26 19:02:30.741858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.334 [2024-11-26 19:02:30.741889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:04.334 [2024-11-26 19:02:30.741907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.334 [2024-11-26 19:02:30.745143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.334 [2024-11-26 19:02:30.745199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.334 BaseBdev1 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.334 BaseBdev2_malloc 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.334 true 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.334 [2024-11-26 19:02:30.811483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:04.334 [2024-11-26 19:02:30.811558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.334 [2024-11-26 19:02:30.811586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:04.334 [2024-11-26 19:02:30.811604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.334 [2024-11-26 19:02:30.814611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.334 [2024-11-26 19:02:30.814707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:04.334 BaseBdev2 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.334 BaseBdev3_malloc 00:13:04.334 19:02:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.334 true 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.334 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.334 [2024-11-26 19:02:30.897486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:04.334 [2024-11-26 19:02:30.897689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.335 [2024-11-26 19:02:30.897728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:04.335 [2024-11-26 19:02:30.897748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.335 [2024-11-26 19:02:30.900872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.335 [2024-11-26 19:02:30.901075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:04.335 BaseBdev3 00:13:04.335 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.335 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.335 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:04.335 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.335 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.335 BaseBdev4_malloc 00:13:04.335 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.335 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:04.335 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.335 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.593 true 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.593 [2024-11-26 19:02:30.961582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:04.593 [2024-11-26 19:02:30.961852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.593 [2024-11-26 19:02:30.961891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:04.593 [2024-11-26 19:02:30.961911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.593 [2024-11-26 19:02:30.964996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.593 [2024-11-26 19:02:30.965187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:04.593 BaseBdev4 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.593 [2024-11-26 19:02:30.973888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.593 [2024-11-26 19:02:30.976510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.593 [2024-11-26 19:02:30.976612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.593 [2024-11-26 19:02:30.976719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:04.593 [2024-11-26 19:02:30.977032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:04.593 [2024-11-26 19:02:30.977055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:04.593 [2024-11-26 19:02:30.977424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:04.593 [2024-11-26 19:02:30.977696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:04.593 [2024-11-26 19:02:30.977728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:04.593 [2024-11-26 19:02:30.978003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:04.593 19:02:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.593 19:02:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.593 19:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.593 19:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.593 "name": "raid_bdev1", 00:13:04.593 "uuid": "7e96675b-e499-40b5-b930-583f2b188eca", 00:13:04.593 "strip_size_kb": 0, 00:13:04.593 "state": "online", 00:13:04.593 "raid_level": "raid1", 00:13:04.593 "superblock": true, 00:13:04.594 "num_base_bdevs": 4, 00:13:04.594 "num_base_bdevs_discovered": 4, 00:13:04.594 "num_base_bdevs_operational": 4, 00:13:04.594 "base_bdevs_list": [ 00:13:04.594 { 
00:13:04.594 "name": "BaseBdev1", 00:13:04.594 "uuid": "e70294f5-77a7-5cc9-b961-b1930a00a65b", 00:13:04.594 "is_configured": true, 00:13:04.594 "data_offset": 2048, 00:13:04.594 "data_size": 63488 00:13:04.594 }, 00:13:04.594 { 00:13:04.594 "name": "BaseBdev2", 00:13:04.594 "uuid": "18d975d1-d171-5620-862b-0762e6beb7f9", 00:13:04.594 "is_configured": true, 00:13:04.594 "data_offset": 2048, 00:13:04.594 "data_size": 63488 00:13:04.594 }, 00:13:04.594 { 00:13:04.594 "name": "BaseBdev3", 00:13:04.594 "uuid": "39eace59-2515-5421-bffc-2251bf443636", 00:13:04.594 "is_configured": true, 00:13:04.594 "data_offset": 2048, 00:13:04.594 "data_size": 63488 00:13:04.594 }, 00:13:04.594 { 00:13:04.594 "name": "BaseBdev4", 00:13:04.594 "uuid": "24a8b2b1-0504-5835-ab84-2eb7dc44efbe", 00:13:04.594 "is_configured": true, 00:13:04.594 "data_offset": 2048, 00:13:04.594 "data_size": 63488 00:13:04.594 } 00:13:04.594 ] 00:13:04.594 }' 00:13:04.594 19:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.594 19:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.160 19:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:05.160 19:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:05.160 [2024-11-26 19:02:31.619638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.095 19:02:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.095 19:02:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.095 "name": "raid_bdev1", 00:13:06.095 "uuid": "7e96675b-e499-40b5-b930-583f2b188eca", 00:13:06.095 "strip_size_kb": 0, 00:13:06.095 "state": "online", 00:13:06.095 "raid_level": "raid1", 00:13:06.095 "superblock": true, 00:13:06.095 "num_base_bdevs": 4, 00:13:06.095 "num_base_bdevs_discovered": 4, 00:13:06.095 "num_base_bdevs_operational": 4, 00:13:06.095 "base_bdevs_list": [ 00:13:06.095 { 00:13:06.095 "name": "BaseBdev1", 00:13:06.095 "uuid": "e70294f5-77a7-5cc9-b961-b1930a00a65b", 00:13:06.095 "is_configured": true, 00:13:06.095 "data_offset": 2048, 00:13:06.095 "data_size": 63488 00:13:06.095 }, 00:13:06.095 { 00:13:06.095 "name": "BaseBdev2", 00:13:06.095 "uuid": "18d975d1-d171-5620-862b-0762e6beb7f9", 00:13:06.095 "is_configured": true, 00:13:06.095 "data_offset": 2048, 00:13:06.095 "data_size": 63488 00:13:06.095 }, 00:13:06.095 { 00:13:06.095 "name": "BaseBdev3", 00:13:06.095 "uuid": "39eace59-2515-5421-bffc-2251bf443636", 00:13:06.095 "is_configured": true, 00:13:06.095 "data_offset": 2048, 00:13:06.095 "data_size": 63488 00:13:06.095 }, 00:13:06.095 { 00:13:06.095 "name": "BaseBdev4", 00:13:06.095 "uuid": "24a8b2b1-0504-5835-ab84-2eb7dc44efbe", 00:13:06.095 "is_configured": true, 00:13:06.095 "data_offset": 2048, 00:13:06.095 "data_size": 63488 00:13:06.095 } 00:13:06.095 ] 00:13:06.095 }' 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.095 19:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.662 19:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.662 19:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.662 19:02:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.662 [2024-11-26 19:02:33.036552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.662 [2024-11-26 19:02:33.036804] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.662 { 00:13:06.662 "results": [ 00:13:06.662 { 00:13:06.662 "job": "raid_bdev1", 00:13:06.662 "core_mask": "0x1", 00:13:06.662 "workload": "randrw", 00:13:06.662 "percentage": 50, 00:13:06.662 "status": "finished", 00:13:06.662 "queue_depth": 1, 00:13:06.662 "io_size": 131072, 00:13:06.662 "runtime": 1.414629, 00:13:06.662 "iops": 6490.04085170034, 00:13:06.663 "mibps": 811.2551064625425, 00:13:06.663 "io_failed": 0, 00:13:06.663 "io_timeout": 0, 00:13:06.663 "avg_latency_us": 149.7296448198354, 00:13:06.663 "min_latency_us": 38.63272727272727, 00:13:06.663 "max_latency_us": 2129.92 00:13:06.663 } 00:13:06.663 ], 00:13:06.663 "core_count": 1 00:13:06.663 } 00:13:06.663 [2024-11-26 19:02:33.040426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.663 [2024-11-26 19:02:33.040568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.663 [2024-11-26 19:02:33.040758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.663 [2024-11-26 19:02:33.040778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75582 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75582 ']' 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75582 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75582 00:13:06.663 killing process with pid 75582 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75582' 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75582 00:13:06.663 [2024-11-26 19:02:33.082418] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.663 19:02:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75582 00:13:06.956 [2024-11-26 19:02:33.389600] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.332 19:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:08.332 19:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.msGeIcinmm 00:13:08.332 19:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:08.332 19:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:08.332 19:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:08.332 19:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.332 19:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:08.332 19:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:08.332 00:13:08.332 real 0m4.997s 00:13:08.332 user 0m6.061s 00:13:08.332 sys 0m0.682s 
00:13:08.332 ************************************ 00:13:08.332 END TEST raid_read_error_test 00:13:08.332 ************************************ 00:13:08.332 19:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.332 19:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.332 19:02:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:08.332 19:02:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:08.332 19:02:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.332 19:02:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.332 ************************************ 00:13:08.332 START TEST raid_write_error_test 00:13:08.332 ************************************ 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.E5Z6zHeZVr 00:13:08.332 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75734 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75734 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75734 ']' 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.332 19:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.332 [2024-11-26 19:02:34.750010] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:13:08.332 [2024-11-26 19:02:34.750205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75734 ] 00:13:08.332 [2024-11-26 19:02:34.937106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.590 [2024-11-26 19:02:35.082907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.849 [2024-11-26 19:02:35.294177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.849 [2024-11-26 19:02:35.294271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.108 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.108 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:09.108 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.108 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.108 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.108 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.368 BaseBdev1_malloc 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.368 true 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.368 [2024-11-26 19:02:35.775105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:09.368 [2024-11-26 19:02:35.775194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.368 [2024-11-26 19:02:35.775226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:09.368 [2024-11-26 19:02:35.775245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.368 [2024-11-26 19:02:35.778343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.368 [2024-11-26 19:02:35.778559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.368 BaseBdev1 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.368 BaseBdev2_malloc 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:09.368 19:02:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.368 true 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.368 [2024-11-26 19:02:35.846770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:09.368 [2024-11-26 19:02:35.846857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.368 [2024-11-26 19:02:35.846883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:09.368 [2024-11-26 19:02:35.846899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.368 [2024-11-26 19:02:35.849983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.368 [2024-11-26 19:02:35.850050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.368 BaseBdev2 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:09.368 BaseBdev3_malloc 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.368 true 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.368 [2024-11-26 19:02:35.920855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:09.368 [2024-11-26 19:02:35.920936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.368 [2024-11-26 19:02:35.921002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:09.368 [2024-11-26 19:02:35.921022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.368 [2024-11-26 19:02:35.924067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.368 [2024-11-26 19:02:35.924133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:09.368 BaseBdev3 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.368 BaseBdev4_malloc 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.368 true 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.368 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.368 [2024-11-26 19:02:35.987216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:09.368 [2024-11-26 19:02:35.987343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.368 [2024-11-26 19:02:35.987373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:09.369 [2024-11-26 19:02:35.987391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.627 [2024-11-26 19:02:35.990549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.627 [2024-11-26 19:02:35.990603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:09.627 BaseBdev4 
00:13:09.627 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.627 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:09.627 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.627 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.627 [2024-11-26 19:02:35.995319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.627 [2024-11-26 19:02:35.998037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.627 [2024-11-26 19:02:35.998302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.627 [2024-11-26 19:02:35.998420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:09.627 [2024-11-26 19:02:35.998733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:09.627 [2024-11-26 19:02:35.998761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:09.627 [2024-11-26 19:02:35.999071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:09.627 [2024-11-26 19:02:35.999331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:09.627 [2024-11-26 19:02:35.999358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:09.627 [2024-11-26 19:02:35.999614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.628 19:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.628 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:09.628 19:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.628 "name": "raid_bdev1", 00:13:09.628 "uuid": "92db8d6d-bf01-4494-9950-1335546cbdbf", 00:13:09.628 "strip_size_kb": 0, 00:13:09.628 "state": "online", 00:13:09.628 "raid_level": "raid1", 00:13:09.628 "superblock": true, 00:13:09.628 "num_base_bdevs": 4, 00:13:09.628 "num_base_bdevs_discovered": 4, 00:13:09.628 
"num_base_bdevs_operational": 4, 00:13:09.628 "base_bdevs_list": [ 00:13:09.628 { 00:13:09.628 "name": "BaseBdev1", 00:13:09.628 "uuid": "da0579cb-cc4e-55d3-aa33-d3abe27b835e", 00:13:09.628 "is_configured": true, 00:13:09.628 "data_offset": 2048, 00:13:09.628 "data_size": 63488 00:13:09.628 }, 00:13:09.628 { 00:13:09.628 "name": "BaseBdev2", 00:13:09.628 "uuid": "8dbb1322-a65b-5590-af21-0597c807f028", 00:13:09.628 "is_configured": true, 00:13:09.628 "data_offset": 2048, 00:13:09.628 "data_size": 63488 00:13:09.628 }, 00:13:09.628 { 00:13:09.628 "name": "BaseBdev3", 00:13:09.628 "uuid": "1625bf5e-cf6b-5cf2-b1d8-1217dc1c4ba3", 00:13:09.628 "is_configured": true, 00:13:09.628 "data_offset": 2048, 00:13:09.628 "data_size": 63488 00:13:09.628 }, 00:13:09.628 { 00:13:09.628 "name": "BaseBdev4", 00:13:09.628 "uuid": "46f9c596-833f-50ed-8b2c-8fccfff13a8f", 00:13:09.628 "is_configured": true, 00:13:09.628 "data_offset": 2048, 00:13:09.628 "data_size": 63488 00:13:09.628 } 00:13:09.628 ] 00:13:09.628 }' 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.628 19:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.194 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:10.194 19:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:10.194 [2024-11-26 19:02:36.665213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.129 [2024-11-26 19:02:37.535004] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:11.129 [2024-11-26 19:02:37.535093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:11.129 [2024-11-26 19:02:37.535422] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.129 "name": "raid_bdev1", 00:13:11.129 "uuid": "92db8d6d-bf01-4494-9950-1335546cbdbf", 00:13:11.129 "strip_size_kb": 0, 00:13:11.129 "state": "online", 00:13:11.129 "raid_level": "raid1", 00:13:11.129 "superblock": true, 00:13:11.129 "num_base_bdevs": 4, 00:13:11.129 "num_base_bdevs_discovered": 3, 00:13:11.129 "num_base_bdevs_operational": 3, 00:13:11.129 "base_bdevs_list": [ 00:13:11.129 { 00:13:11.129 "name": null, 00:13:11.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.129 "is_configured": false, 00:13:11.129 "data_offset": 0, 00:13:11.129 "data_size": 63488 00:13:11.129 }, 00:13:11.129 { 00:13:11.129 "name": "BaseBdev2", 00:13:11.129 "uuid": "8dbb1322-a65b-5590-af21-0597c807f028", 00:13:11.129 "is_configured": true, 00:13:11.129 "data_offset": 2048, 00:13:11.129 "data_size": 63488 00:13:11.129 }, 00:13:11.129 { 00:13:11.129 "name": "BaseBdev3", 00:13:11.129 "uuid": "1625bf5e-cf6b-5cf2-b1d8-1217dc1c4ba3", 00:13:11.129 "is_configured": true, 00:13:11.129 "data_offset": 2048, 00:13:11.129 "data_size": 63488 00:13:11.129 }, 00:13:11.129 { 00:13:11.129 "name": "BaseBdev4", 00:13:11.129 "uuid": "46f9c596-833f-50ed-8b2c-8fccfff13a8f", 00:13:11.129 "is_configured": true, 00:13:11.129 "data_offset": 2048, 00:13:11.129 "data_size": 63488 00:13:11.129 } 00:13:11.129 ] 
00:13:11.129 }' 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.129 19:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.697 [2024-11-26 19:02:38.057074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.697 [2024-11-26 19:02:38.057114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.697 [2024-11-26 19:02:38.060532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.697 [2024-11-26 19:02:38.060595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.697 [2024-11-26 19:02:38.060765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.697 [2024-11-26 19:02:38.060785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:11.697 { 00:13:11.697 "results": [ 00:13:11.697 { 00:13:11.697 "job": "raid_bdev1", 00:13:11.697 "core_mask": "0x1", 00:13:11.697 "workload": "randrw", 00:13:11.697 "percentage": 50, 00:13:11.697 "status": "finished", 00:13:11.697 "queue_depth": 1, 00:13:11.697 "io_size": 131072, 00:13:11.697 "runtime": 1.389148, 00:13:11.697 "iops": 7106.514208709224, 00:13:11.697 "mibps": 888.314276088653, 00:13:11.697 "io_failed": 0, 00:13:11.697 "io_timeout": 0, 00:13:11.697 "avg_latency_us": 136.34656843966408, 00:13:11.697 "min_latency_us": 40.96, 00:13:11.697 "max_latency_us": 2025.658181818182 00:13:11.697 } 00:13:11.697 ], 00:13:11.697 "core_count": 1 00:13:11.697 } 
00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75734 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75734 ']' 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75734 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75734 00:13:11.697 killing process with pid 75734 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75734' 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75734 00:13:11.697 [2024-11-26 19:02:38.096360] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.697 19:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75734 00:13:11.956 [2024-11-26 19:02:38.390557] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.331 19:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.E5Z6zHeZVr 00:13:13.331 19:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:13.332 19:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:13.332 19:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:13.332 19:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:13.332 19:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.332 19:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:13.332 19:02:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:13.332 00:13:13.332 real 0m4.942s 00:13:13.332 user 0m6.009s 00:13:13.332 sys 0m0.668s 00:13:13.332 19:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.332 ************************************ 00:13:13.332 END TEST raid_write_error_test 00:13:13.332 ************************************ 00:13:13.332 19:02:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.332 19:02:39 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:13.332 19:02:39 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:13.332 19:02:39 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:13.332 19:02:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:13.332 19:02:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.332 19:02:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:13.332 ************************************ 00:13:13.332 START TEST raid_rebuild_test 00:13:13.332 ************************************ 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:13.332 
19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75877 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75877 00:13:13.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75877 ']' 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.332 19:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.332 [2024-11-26 19:02:39.724910] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:13:13.332 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:13.332 Zero copy mechanism will not be used. 
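The "zero copy" notice above follows directly from the bdevperf invocation's `-o 3M` option: 3 MiB is 3145728 bytes, which exceeds the 65536-byte zero-copy threshold the log reports, so zero copy is disabled for this run. A quick check of that arithmetic (both values taken from the log lines above):

```python
# "-o 3M" from the bdevperf command line -> 3 MiB in bytes
io_size = 3 * 1024 * 1024

# Zero-copy threshold reported in the startup notice
zero_copy_threshold = 65536

# I/O size above the threshold -> "Zero copy mechanism will not be used."
print(io_size, io_size > zero_copy_threshold)  # 3145728 True
```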
00:13:13.332 [2024-11-26 19:02:39.725402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75877 ] 00:13:13.332 [2024-11-26 19:02:39.897174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.595 [2024-11-26 19:02:40.048669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.868 [2024-11-26 19:02:40.265526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.868 [2024-11-26 19:02:40.265608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.127 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.127 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.387 BaseBdev1_malloc 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.387 [2024-11-26 19:02:40.798748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:14.387 
[2024-11-26 19:02:40.798840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.387 [2024-11-26 19:02:40.798874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:14.387 [2024-11-26 19:02:40.798893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.387 [2024-11-26 19:02:40.801794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.387 [2024-11-26 19:02:40.801861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:14.387 BaseBdev1 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.387 BaseBdev2_malloc 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.387 [2024-11-26 19:02:40.851705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:14.387 [2024-11-26 19:02:40.851801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.387 [2024-11-26 19:02:40.851836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:13:14.387 [2024-11-26 19:02:40.851855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.387 [2024-11-26 19:02:40.854785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.387 [2024-11-26 19:02:40.854848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:14.387 BaseBdev2 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.387 spare_malloc 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.387 spare_delay 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.387 [2024-11-26 19:02:40.922447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:14.387 [2024-11-26 19:02:40.922540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:14.387 [2024-11-26 19:02:40.922571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:14.387 [2024-11-26 19:02:40.922589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.387 [2024-11-26 19:02:40.925528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.387 [2024-11-26 19:02:40.925596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:14.387 spare 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.387 [2024-11-26 19:02:40.930589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.387 [2024-11-26 19:02:40.933208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.387 [2024-11-26 19:02:40.933403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:14.387 [2024-11-26 19:02:40.933427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:14.387 [2024-11-26 19:02:40.933786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:14.387 [2024-11-26 19:02:40.934042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:14.387 [2024-11-26 19:02:40.934076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:14.387 [2024-11-26 19:02:40.934259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.387 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.388 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.388 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.388 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.388 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.388 "name": "raid_bdev1", 00:13:14.388 "uuid": "0cb3ab72-c43f-4bd9-a908-aaa3b9bbbfa0", 00:13:14.388 "strip_size_kb": 0, 00:13:14.388 "state": "online", 00:13:14.388 
"raid_level": "raid1", 00:13:14.388 "superblock": false, 00:13:14.388 "num_base_bdevs": 2, 00:13:14.388 "num_base_bdevs_discovered": 2, 00:13:14.388 "num_base_bdevs_operational": 2, 00:13:14.388 "base_bdevs_list": [ 00:13:14.388 { 00:13:14.388 "name": "BaseBdev1", 00:13:14.388 "uuid": "f86d8083-d132-5d8b-9bb7-b2966d9edab5", 00:13:14.388 "is_configured": true, 00:13:14.388 "data_offset": 0, 00:13:14.388 "data_size": 65536 00:13:14.388 }, 00:13:14.388 { 00:13:14.388 "name": "BaseBdev2", 00:13:14.388 "uuid": "bdcc2a1e-6055-5652-8a70-3c6b59a02510", 00:13:14.388 "is_configured": true, 00:13:14.388 "data_offset": 0, 00:13:14.388 "data_size": 65536 00:13:14.388 } 00:13:14.388 ] 00:13:14.388 }' 00:13:14.388 19:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.388 19:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.953 [2024-11-26 19:02:41.459090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.953 19:02:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:14.953 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:14.954 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:14.954 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:14.954 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:14.954 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:14.954 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:14.954 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:14.954 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:14.954 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:14.954 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:14.954 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:15.522 [2024-11-26 19:02:41.846891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:15.522 /dev/nbd0 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.522 1+0 records in 00:13:15.522 1+0 records out 00:13:15.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004476 s, 9.2 MB/s 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.522 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:15.523 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.523 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.523 19:02:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:15.523 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.523 19:02:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:15.523 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 
-- # '[' raid1 = raid5f ']' 00:13:15.523 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:15.523 19:02:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:22.085 65536+0 records in 00:13:22.085 65536+0 records out 00:13:22.085 33554432 bytes (34 MB, 32 MiB) copied, 6.00767 s, 5.6 MB/s 00:13:22.085 19:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:22.085 19:02:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.085 19:02:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:22.085 19:02:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:22.085 19:02:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:22.085 19:02:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.085 19:02:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:22.085 [2024-11-26 19:02:48.231219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # 
break 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.085 [2024-11-26 19:02:48.244490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.085 "name": "raid_bdev1", 00:13:22.085 "uuid": "0cb3ab72-c43f-4bd9-a908-aaa3b9bbbfa0", 00:13:22.085 "strip_size_kb": 0, 00:13:22.085 "state": "online", 00:13:22.085 "raid_level": "raid1", 00:13:22.085 "superblock": false, 00:13:22.085 "num_base_bdevs": 2, 00:13:22.085 "num_base_bdevs_discovered": 1, 00:13:22.085 "num_base_bdevs_operational": 1, 00:13:22.085 "base_bdevs_list": [ 00:13:22.085 { 00:13:22.085 "name": null, 00:13:22.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.085 "is_configured": false, 00:13:22.085 "data_offset": 0, 00:13:22.085 "data_size": 65536 00:13:22.085 }, 00:13:22.085 { 00:13:22.085 "name": "BaseBdev2", 00:13:22.085 "uuid": "bdcc2a1e-6055-5652-8a70-3c6b59a02510", 00:13:22.085 "is_configured": true, 00:13:22.085 "data_offset": 0, 00:13:22.085 "data_size": 65536 00:13:22.085 } 00:13:22.085 ] 00:13:22.085 }' 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.085 19:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.344 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:22.344 19:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.344 19:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.344 [2024-11-26 19:02:48.764732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.344 [2024-11-26 19:02:48.782487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:22.344 19:02:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.344 19:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:22.344 [2024-11-26 19:02:48.785395] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.278 "name": "raid_bdev1", 00:13:23.278 "uuid": "0cb3ab72-c43f-4bd9-a908-aaa3b9bbbfa0", 00:13:23.278 "strip_size_kb": 0, 00:13:23.278 "state": "online", 00:13:23.278 "raid_level": "raid1", 00:13:23.278 "superblock": false, 00:13:23.278 "num_base_bdevs": 2, 00:13:23.278 "num_base_bdevs_discovered": 2, 00:13:23.278 "num_base_bdevs_operational": 2, 00:13:23.278 "process": { 00:13:23.278 "type": "rebuild", 00:13:23.278 "target": "spare", 00:13:23.278 "progress": { 00:13:23.278 "blocks": 18432, 
00:13:23.278 "percent": 28 00:13:23.278 } 00:13:23.278 }, 00:13:23.278 "base_bdevs_list": [ 00:13:23.278 { 00:13:23.278 "name": "spare", 00:13:23.278 "uuid": "06cbe042-ceea-598e-8c98-43ea9206154f", 00:13:23.278 "is_configured": true, 00:13:23.278 "data_offset": 0, 00:13:23.278 "data_size": 65536 00:13:23.278 }, 00:13:23.278 { 00:13:23.278 "name": "BaseBdev2", 00:13:23.278 "uuid": "bdcc2a1e-6055-5652-8a70-3c6b59a02510", 00:13:23.278 "is_configured": true, 00:13:23.278 "data_offset": 0, 00:13:23.278 "data_size": 65536 00:13:23.278 } 00:13:23.278 ] 00:13:23.278 }' 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.278 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.537 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.537 19:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:23.537 19:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.537 19:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.538 [2024-11-26 19:02:49.947084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.538 [2024-11-26 19:02:49.997461] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:23.538 [2024-11-26 19:02:49.997921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.538 [2024-11-26 19:02:49.997951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.538 [2024-11-26 19:02:49.997973] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:23.538 19:02:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.538 "name": "raid_bdev1", 00:13:23.538 "uuid": "0cb3ab72-c43f-4bd9-a908-aaa3b9bbbfa0", 00:13:23.538 "strip_size_kb": 0, 00:13:23.538 "state": "online", 00:13:23.538 "raid_level": "raid1", 00:13:23.538 
"superblock": false, 00:13:23.538 "num_base_bdevs": 2, 00:13:23.538 "num_base_bdevs_discovered": 1, 00:13:23.538 "num_base_bdevs_operational": 1, 00:13:23.538 "base_bdevs_list": [ 00:13:23.538 { 00:13:23.538 "name": null, 00:13:23.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.538 "is_configured": false, 00:13:23.538 "data_offset": 0, 00:13:23.538 "data_size": 65536 00:13:23.538 }, 00:13:23.538 { 00:13:23.538 "name": "BaseBdev2", 00:13:23.538 "uuid": "bdcc2a1e-6055-5652-8a70-3c6b59a02510", 00:13:23.538 "is_configured": true, 00:13:23.538 "data_offset": 0, 00:13:23.538 "data_size": 65536 00:13:23.538 } 00:13:23.538 ] 00:13:23.538 }' 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.538 19:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:24.107 "name": "raid_bdev1", 00:13:24.107 "uuid": "0cb3ab72-c43f-4bd9-a908-aaa3b9bbbfa0", 00:13:24.107 "strip_size_kb": 0, 00:13:24.107 "state": "online", 00:13:24.107 "raid_level": "raid1", 00:13:24.107 "superblock": false, 00:13:24.107 "num_base_bdevs": 2, 00:13:24.107 "num_base_bdevs_discovered": 1, 00:13:24.107 "num_base_bdevs_operational": 1, 00:13:24.107 "base_bdevs_list": [ 00:13:24.107 { 00:13:24.107 "name": null, 00:13:24.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.107 "is_configured": false, 00:13:24.107 "data_offset": 0, 00:13:24.107 "data_size": 65536 00:13:24.107 }, 00:13:24.107 { 00:13:24.107 "name": "BaseBdev2", 00:13:24.107 "uuid": "bdcc2a1e-6055-5652-8a70-3c6b59a02510", 00:13:24.107 "is_configured": true, 00:13:24.107 "data_offset": 0, 00:13:24.107 "data_size": 65536 00:13:24.107 } 00:13:24.107 ] 00:13:24.107 }' 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.107 19:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.107 [2024-11-26 19:02:50.722759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.366 [2024-11-26 19:02:50.740354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:24.366 19:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.366 
19:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:24.366 [2024-11-26 19:02:50.743060] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:25.304 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.304 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.304 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.304 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.304 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.304 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.304 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.304 19:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.304 19:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.304 19:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.304 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.305 "name": "raid_bdev1", 00:13:25.305 "uuid": "0cb3ab72-c43f-4bd9-a908-aaa3b9bbbfa0", 00:13:25.305 "strip_size_kb": 0, 00:13:25.305 "state": "online", 00:13:25.305 "raid_level": "raid1", 00:13:25.305 "superblock": false, 00:13:25.305 "num_base_bdevs": 2, 00:13:25.305 "num_base_bdevs_discovered": 2, 00:13:25.305 "num_base_bdevs_operational": 2, 00:13:25.305 "process": { 00:13:25.305 "type": "rebuild", 00:13:25.305 "target": "spare", 00:13:25.305 "progress": { 00:13:25.305 "blocks": 20480, 00:13:25.305 "percent": 31 00:13:25.305 } 00:13:25.305 }, 00:13:25.305 "base_bdevs_list": [ 
00:13:25.305 { 00:13:25.305 "name": "spare", 00:13:25.305 "uuid": "06cbe042-ceea-598e-8c98-43ea9206154f", 00:13:25.305 "is_configured": true, 00:13:25.305 "data_offset": 0, 00:13:25.305 "data_size": 65536 00:13:25.305 }, 00:13:25.305 { 00:13:25.305 "name": "BaseBdev2", 00:13:25.305 "uuid": "bdcc2a1e-6055-5652-8a70-3c6b59a02510", 00:13:25.305 "is_configured": true, 00:13:25.305 "data_offset": 0, 00:13:25.305 "data_size": 65536 00:13:25.305 } 00:13:25.305 ] 00:13:25.305 }' 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=409 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.305 
19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.305 19:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.563 19:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.563 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.563 "name": "raid_bdev1", 00:13:25.563 "uuid": "0cb3ab72-c43f-4bd9-a908-aaa3b9bbbfa0", 00:13:25.563 "strip_size_kb": 0, 00:13:25.563 "state": "online", 00:13:25.563 "raid_level": "raid1", 00:13:25.563 "superblock": false, 00:13:25.563 "num_base_bdevs": 2, 00:13:25.563 "num_base_bdevs_discovered": 2, 00:13:25.563 "num_base_bdevs_operational": 2, 00:13:25.563 "process": { 00:13:25.563 "type": "rebuild", 00:13:25.563 "target": "spare", 00:13:25.563 "progress": { 00:13:25.563 "blocks": 22528, 00:13:25.563 "percent": 34 00:13:25.563 } 00:13:25.563 }, 00:13:25.564 "base_bdevs_list": [ 00:13:25.564 { 00:13:25.564 "name": "spare", 00:13:25.564 "uuid": "06cbe042-ceea-598e-8c98-43ea9206154f", 00:13:25.564 "is_configured": true, 00:13:25.564 "data_offset": 0, 00:13:25.564 "data_size": 65536 00:13:25.564 }, 00:13:25.564 { 00:13:25.564 "name": "BaseBdev2", 00:13:25.564 "uuid": "bdcc2a1e-6055-5652-8a70-3c6b59a02510", 00:13:25.564 "is_configured": true, 00:13:25.564 "data_offset": 0, 00:13:25.564 "data_size": 65536 00:13:25.564 } 00:13:25.564 ] 00:13:25.564 }' 00:13:25.564 19:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.564 19:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:13:25.564 19:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.564 19:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.564 19:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.500 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.500 "name": "raid_bdev1", 00:13:26.500 "uuid": "0cb3ab72-c43f-4bd9-a908-aaa3b9bbbfa0", 00:13:26.500 "strip_size_kb": 0, 00:13:26.500 "state": "online", 00:13:26.500 "raid_level": "raid1", 00:13:26.500 "superblock": false, 00:13:26.500 "num_base_bdevs": 2, 00:13:26.500 "num_base_bdevs_discovered": 2, 00:13:26.500 "num_base_bdevs_operational": 2, 00:13:26.500 "process": { 
00:13:26.500 "type": "rebuild", 00:13:26.500 "target": "spare", 00:13:26.500 "progress": { 00:13:26.500 "blocks": 47104, 00:13:26.500 "percent": 71 00:13:26.500 } 00:13:26.500 }, 00:13:26.500 "base_bdevs_list": [ 00:13:26.500 { 00:13:26.500 "name": "spare", 00:13:26.500 "uuid": "06cbe042-ceea-598e-8c98-43ea9206154f", 00:13:26.500 "is_configured": true, 00:13:26.500 "data_offset": 0, 00:13:26.500 "data_size": 65536 00:13:26.500 }, 00:13:26.500 { 00:13:26.500 "name": "BaseBdev2", 00:13:26.500 "uuid": "bdcc2a1e-6055-5652-8a70-3c6b59a02510", 00:13:26.500 "is_configured": true, 00:13:26.500 "data_offset": 0, 00:13:26.500 "data_size": 65536 00:13:26.500 } 00:13:26.500 ] 00:13:26.500 }' 00:13:26.758 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.758 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.758 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.758 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.758 19:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.694 [2024-11-26 19:02:53.975080] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:27.694 [2024-11-26 19:02:53.975187] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:27.694 [2024-11-26 19:02:53.975274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.694 "name": "raid_bdev1", 00:13:27.694 "uuid": "0cb3ab72-c43f-4bd9-a908-aaa3b9bbbfa0", 00:13:27.694 "strip_size_kb": 0, 00:13:27.694 "state": "online", 00:13:27.694 "raid_level": "raid1", 00:13:27.694 "superblock": false, 00:13:27.694 "num_base_bdevs": 2, 00:13:27.694 "num_base_bdevs_discovered": 2, 00:13:27.694 "num_base_bdevs_operational": 2, 00:13:27.694 "base_bdevs_list": [ 00:13:27.694 { 00:13:27.694 "name": "spare", 00:13:27.694 "uuid": "06cbe042-ceea-598e-8c98-43ea9206154f", 00:13:27.694 "is_configured": true, 00:13:27.694 "data_offset": 0, 00:13:27.694 "data_size": 65536 00:13:27.694 }, 00:13:27.694 { 00:13:27.694 "name": "BaseBdev2", 00:13:27.694 "uuid": "bdcc2a1e-6055-5652-8a70-3c6b59a02510", 00:13:27.694 "is_configured": true, 00:13:27.694 "data_offset": 0, 00:13:27.694 "data_size": 65536 00:13:27.694 } 00:13:27.694 ] 00:13:27.694 }' 00:13:27.694 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:27.953 19:02:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.953 "name": "raid_bdev1", 00:13:27.953 "uuid": "0cb3ab72-c43f-4bd9-a908-aaa3b9bbbfa0", 00:13:27.953 "strip_size_kb": 0, 00:13:27.953 "state": "online", 00:13:27.953 "raid_level": "raid1", 00:13:27.953 "superblock": false, 00:13:27.953 "num_base_bdevs": 2, 00:13:27.953 "num_base_bdevs_discovered": 2, 00:13:27.953 "num_base_bdevs_operational": 2, 00:13:27.953 "base_bdevs_list": [ 00:13:27.953 { 00:13:27.953 "name": "spare", 00:13:27.953 "uuid": "06cbe042-ceea-598e-8c98-43ea9206154f", 00:13:27.953 "is_configured": true, 
00:13:27.953 "data_offset": 0, 00:13:27.953 "data_size": 65536 00:13:27.953 }, 00:13:27.953 { 00:13:27.953 "name": "BaseBdev2", 00:13:27.953 "uuid": "bdcc2a1e-6055-5652-8a70-3c6b59a02510", 00:13:27.953 "is_configured": true, 00:13:27.953 "data_offset": 0, 00:13:27.953 "data_size": 65536 00:13:27.953 } 00:13:27.953 ] 00:13:27.953 }' 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.953 19:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.212 19:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.212 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.212 "name": "raid_bdev1", 00:13:28.212 "uuid": "0cb3ab72-c43f-4bd9-a908-aaa3b9bbbfa0", 00:13:28.212 "strip_size_kb": 0, 00:13:28.212 "state": "online", 00:13:28.212 "raid_level": "raid1", 00:13:28.212 "superblock": false, 00:13:28.212 "num_base_bdevs": 2, 00:13:28.212 "num_base_bdevs_discovered": 2, 00:13:28.212 "num_base_bdevs_operational": 2, 00:13:28.212 "base_bdevs_list": [ 00:13:28.212 { 00:13:28.212 "name": "spare", 00:13:28.212 "uuid": "06cbe042-ceea-598e-8c98-43ea9206154f", 00:13:28.212 "is_configured": true, 00:13:28.212 "data_offset": 0, 00:13:28.212 "data_size": 65536 00:13:28.212 }, 00:13:28.212 { 00:13:28.212 "name": "BaseBdev2", 00:13:28.212 "uuid": "bdcc2a1e-6055-5652-8a70-3c6b59a02510", 00:13:28.212 "is_configured": true, 00:13:28.212 "data_offset": 0, 00:13:28.212 "data_size": 65536 00:13:28.212 } 00:13:28.212 ] 00:13:28.212 }' 00:13:28.212 19:02:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.212 19:02:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.471 19:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.471 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.471 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.471 [2024-11-26 19:02:55.070869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.471 [2024-11-26 19:02:55.071089] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.471 [2024-11-26 19:02:55.071229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.471 [2024-11-26 19:02:55.071379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.471 [2024-11-26 19:02:55.071400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:28.471 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.471 19:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.471 19:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:28.471 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.471 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.471 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.731 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:28.989 /dev/nbd0 00:13:28.989 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:28.989 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:28.989 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:28.989 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:28.989 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.989 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.989 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:28.989 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:28.989 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.989 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.990 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.990 1+0 records in 00:13:28.990 1+0 records out 00:13:28.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302788 s, 13.5 MB/s 00:13:28.990 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.990 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:28.990 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.990 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.990 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:28.990 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.990 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.990 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:29.248 /dev/nbd1 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.248 1+0 records in 00:13:29.248 1+0 records out 00:13:29.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370321 s, 11.1 MB/s 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.248 19:02:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:29.249 19:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:29.507 19:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:29.507 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.507 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:29.507 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.507 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:29.507 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.507 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:29.766 19:02:56 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.766 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.766 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.766 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.766 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.766 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:29.766 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:29.766 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.766 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.766 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75877 00:13:30.333 19:02:56 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75877 ']' 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75877 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75877 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.333 killing process with pid 75877 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75877' 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75877 00:13:30.333 Received shutdown signal, test time was about 60.000000 seconds 00:13:30.333 00:13:30.333 Latency(us) 00:13:30.333 [2024-11-26T19:02:56.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.333 [2024-11-26T19:02:56.956Z] =================================================================================================================== 00:13:30.333 [2024-11-26T19:02:56.956Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:30.333 [2024-11-26 19:02:56.699061] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.333 19:02:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75877 00:13:30.593 [2024-11-26 19:02:56.971213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.530 19:02:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:31.530 00:13:31.530 real 0m18.503s 00:13:31.530 user 0m21.449s 00:13:31.530 sys 0m3.426s 00:13:31.530 
************************************ 00:13:31.530 END TEST raid_rebuild_test 00:13:31.530 ************************************ 00:13:31.530 19:02:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.530 19:02:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.789 19:02:58 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:31.789 19:02:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:31.789 19:02:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.789 19:02:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:31.789 ************************************ 00:13:31.789 START TEST raid_rebuild_test_sb 00:13:31.789 ************************************ 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76325 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76325 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76325 ']' 00:13:31.789 19:02:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.789 19:02:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.789 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:31.789 Zero copy mechanism will not be used. 00:13:31.789 [2024-11-26 19:02:58.288623] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:13:31.789 [2024-11-26 19:02:58.288769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76325 ] 00:13:32.048 [2024-11-26 19:02:58.464138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.048 [2024-11-26 19:02:58.616016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.307 [2024-11-26 19:02:58.836396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.307 [2024-11-26 19:02:58.836476] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.875 BaseBdev1_malloc 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.875 [2024-11-26 19:02:59.272484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:32.875 [2024-11-26 19:02:59.272598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.875 [2024-11-26 19:02:59.272633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:32.875 [2024-11-26 19:02:59.272654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.875 [2024-11-26 19:02:59.275702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.875 [2024-11-26 19:02:59.275770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:32.875 BaseBdev1 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:32.875 19:02:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.875 BaseBdev2_malloc 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:32.875 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.876 [2024-11-26 19:02:59.330794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:32.876 [2024-11-26 19:02:59.330912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.876 [2024-11-26 19:02:59.330949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:32.876 [2024-11-26 19:02:59.330968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.876 [2024-11-26 19:02:59.333922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.876 [2024-11-26 19:02:59.334006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:32.876 BaseBdev2 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.876 spare_malloc 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.876 spare_delay 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.876 [2024-11-26 19:02:59.412754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:32.876 [2024-11-26 19:02:59.412864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.876 [2024-11-26 19:02:59.412896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:32.876 [2024-11-26 19:02:59.412914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.876 [2024-11-26 19:02:59.415920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.876 [2024-11-26 19:02:59.415990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:32.876 spare 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.876 19:02:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.876 [2024-11-26 19:02:59.420884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.876 [2024-11-26 19:02:59.423486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.876 [2024-11-26 19:02:59.423776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:32.876 [2024-11-26 19:02:59.423813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:32.876 [2024-11-26 19:02:59.424128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:32.876 [2024-11-26 19:02:59.424389] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:32.876 [2024-11-26 19:02:59.424417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:32.876 [2024-11-26 19:02:59.424609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.876 "name": "raid_bdev1", 00:13:32.876 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:32.876 "strip_size_kb": 0, 00:13:32.876 "state": "online", 00:13:32.876 "raid_level": "raid1", 00:13:32.876 "superblock": true, 00:13:32.876 "num_base_bdevs": 2, 00:13:32.876 "num_base_bdevs_discovered": 2, 00:13:32.876 "num_base_bdevs_operational": 2, 00:13:32.876 "base_bdevs_list": [ 00:13:32.876 { 00:13:32.876 "name": "BaseBdev1", 00:13:32.876 "uuid": "14a1b78d-7324-5223-a9e7-96e356cdcdfc", 00:13:32.876 "is_configured": true, 00:13:32.876 "data_offset": 2048, 00:13:32.876 "data_size": 63488 00:13:32.876 }, 00:13:32.876 { 00:13:32.876 "name": "BaseBdev2", 00:13:32.876 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:32.876 "is_configured": true, 00:13:32.876 "data_offset": 2048, 00:13:32.876 "data_size": 63488 00:13:32.876 } 00:13:32.876 ] 00:13:32.876 }' 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.876 19:02:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:33.443 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:33.443 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.443 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:33.443 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.443 [2024-11-26 19:02:59.929484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.443 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.443 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:33.443 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.443 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.443 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.443 19:02:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:33.443 19:02:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:33.443 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:33.702 [2024-11-26 19:03:00.269287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:33.702 /dev/nbd0 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.702 1+0 records in 00:13:33.702 1+0 records out 00:13:33.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246726 s, 16.6 MB/s 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:33.702 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.961 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:33.961 19:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:33.961 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.961 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:33.961 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:33.961 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:33.961 19:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:40.520 63488+0 records in 00:13:40.520 63488+0 records out 00:13:40.520 32505856 bytes (33 MB, 31 MiB) copied, 5.79959 s, 5.6 MB/s 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.520 19:03:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:40.520 [2024-11-26 19:03:06.467889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.520 [2024-11-26 19:03:06.481063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.520 19:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.521 19:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.521 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.521 19:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.521 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.521 "name": "raid_bdev1", 00:13:40.521 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:40.521 "strip_size_kb": 0, 00:13:40.521 "state": "online", 00:13:40.521 "raid_level": "raid1", 00:13:40.521 "superblock": true, 
00:13:40.521 "num_base_bdevs": 2, 00:13:40.521 "num_base_bdevs_discovered": 1, 00:13:40.521 "num_base_bdevs_operational": 1, 00:13:40.521 "base_bdevs_list": [ 00:13:40.521 { 00:13:40.521 "name": null, 00:13:40.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.521 "is_configured": false, 00:13:40.521 "data_offset": 0, 00:13:40.521 "data_size": 63488 00:13:40.521 }, 00:13:40.521 { 00:13:40.521 "name": "BaseBdev2", 00:13:40.521 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:40.521 "is_configured": true, 00:13:40.521 "data_offset": 2048, 00:13:40.521 "data_size": 63488 00:13:40.521 } 00:13:40.521 ] 00:13:40.521 }' 00:13:40.521 19:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.521 19:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.521 19:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:40.521 19:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.521 19:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.521 [2024-11-26 19:03:07.025469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.521 [2024-11-26 19:03:07.044788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:40.521 19:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.521 19:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:40.521 [2024-11-26 19:03:07.047973] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:41.456 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.456 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:41.456 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.456 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.456 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.456 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.456 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.456 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.456 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.456 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.713 "name": "raid_bdev1", 00:13:41.713 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:41.713 "strip_size_kb": 0, 00:13:41.713 "state": "online", 00:13:41.713 "raid_level": "raid1", 00:13:41.713 "superblock": true, 00:13:41.713 "num_base_bdevs": 2, 00:13:41.713 "num_base_bdevs_discovered": 2, 00:13:41.713 "num_base_bdevs_operational": 2, 00:13:41.713 "process": { 00:13:41.713 "type": "rebuild", 00:13:41.713 "target": "spare", 00:13:41.713 "progress": { 00:13:41.713 "blocks": 20480, 00:13:41.713 "percent": 32 00:13:41.713 } 00:13:41.713 }, 00:13:41.713 "base_bdevs_list": [ 00:13:41.713 { 00:13:41.713 "name": "spare", 00:13:41.713 "uuid": "2323c108-c648-55c2-98e9-a5f1180a485f", 00:13:41.713 "is_configured": true, 00:13:41.713 "data_offset": 2048, 00:13:41.713 "data_size": 63488 00:13:41.713 }, 00:13:41.713 { 00:13:41.713 "name": "BaseBdev2", 00:13:41.713 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:41.713 "is_configured": true, 00:13:41.713 "data_offset": 2048, 00:13:41.713 "data_size": 63488 
00:13:41.713 } 00:13:41.713 ] 00:13:41.713 }' 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.713 [2024-11-26 19:03:08.229638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.713 [2024-11-26 19:03:08.260488] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:41.713 [2024-11-26 19:03:08.260575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.713 [2024-11-26 19:03:08.260602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.713 [2024-11-26 19:03:08.260619] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.713 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.972 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.972 "name": "raid_bdev1", 00:13:41.972 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:41.972 "strip_size_kb": 0, 00:13:41.972 "state": "online", 00:13:41.972 "raid_level": "raid1", 00:13:41.972 "superblock": true, 00:13:41.972 "num_base_bdevs": 2, 00:13:41.972 "num_base_bdevs_discovered": 1, 00:13:41.972 "num_base_bdevs_operational": 1, 00:13:41.972 "base_bdevs_list": [ 00:13:41.972 { 00:13:41.972 "name": null, 00:13:41.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.972 "is_configured": false, 00:13:41.972 "data_offset": 0, 00:13:41.972 "data_size": 63488 00:13:41.972 }, 00:13:41.972 { 00:13:41.972 "name": "BaseBdev2", 00:13:41.972 "uuid": 
"b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:41.972 "is_configured": true, 00:13:41.972 "data_offset": 2048, 00:13:41.972 "data_size": 63488 00:13:41.972 } 00:13:41.972 ] 00:13:41.972 }' 00:13:41.972 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.972 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.230 "name": "raid_bdev1", 00:13:42.230 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:42.230 "strip_size_kb": 0, 00:13:42.230 "state": "online", 00:13:42.230 "raid_level": "raid1", 00:13:42.230 "superblock": true, 00:13:42.230 "num_base_bdevs": 2, 00:13:42.230 "num_base_bdevs_discovered": 1, 00:13:42.230 "num_base_bdevs_operational": 1, 00:13:42.230 "base_bdevs_list": [ 00:13:42.230 { 
00:13:42.230 "name": null, 00:13:42.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.230 "is_configured": false, 00:13:42.230 "data_offset": 0, 00:13:42.230 "data_size": 63488 00:13:42.230 }, 00:13:42.230 { 00:13:42.230 "name": "BaseBdev2", 00:13:42.230 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:42.230 "is_configured": true, 00:13:42.230 "data_offset": 2048, 00:13:42.230 "data_size": 63488 00:13:42.230 } 00:13:42.230 ] 00:13:42.230 }' 00:13:42.230 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.488 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.488 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.488 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.488 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.488 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.488 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.488 [2024-11-26 19:03:08.951016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.488 [2024-11-26 19:03:08.967776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:42.488 19:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.488 19:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:42.488 [2024-11-26 19:03:08.970524] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:43.427 19:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.427 19:03:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.427 19:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.427 19:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.427 19:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.427 19:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.427 19:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.427 19:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.427 19:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.427 19:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.427 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.427 "name": "raid_bdev1", 00:13:43.427 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:43.427 "strip_size_kb": 0, 00:13:43.427 "state": "online", 00:13:43.427 "raid_level": "raid1", 00:13:43.427 "superblock": true, 00:13:43.427 "num_base_bdevs": 2, 00:13:43.427 "num_base_bdevs_discovered": 2, 00:13:43.427 "num_base_bdevs_operational": 2, 00:13:43.427 "process": { 00:13:43.427 "type": "rebuild", 00:13:43.427 "target": "spare", 00:13:43.427 "progress": { 00:13:43.427 "blocks": 20480, 00:13:43.427 "percent": 32 00:13:43.427 } 00:13:43.427 }, 00:13:43.427 "base_bdevs_list": [ 00:13:43.427 { 00:13:43.427 "name": "spare", 00:13:43.427 "uuid": "2323c108-c648-55c2-98e9-a5f1180a485f", 00:13:43.427 "is_configured": true, 00:13:43.427 "data_offset": 2048, 00:13:43.427 "data_size": 63488 00:13:43.427 }, 00:13:43.427 { 00:13:43.427 "name": "BaseBdev2", 00:13:43.427 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:43.427 
"is_configured": true, 00:13:43.427 "data_offset": 2048, 00:13:43.427 "data_size": 63488 00:13:43.427 } 00:13:43.427 ] 00:13:43.427 }' 00:13:43.427 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:43.686 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=428 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.686 "name": "raid_bdev1", 00:13:43.686 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:43.686 "strip_size_kb": 0, 00:13:43.686 "state": "online", 00:13:43.686 "raid_level": "raid1", 00:13:43.686 "superblock": true, 00:13:43.686 "num_base_bdevs": 2, 00:13:43.686 "num_base_bdevs_discovered": 2, 00:13:43.686 "num_base_bdevs_operational": 2, 00:13:43.686 "process": { 00:13:43.686 "type": "rebuild", 00:13:43.686 "target": "spare", 00:13:43.686 "progress": { 00:13:43.686 "blocks": 22528, 00:13:43.686 "percent": 35 00:13:43.686 } 00:13:43.686 }, 00:13:43.686 "base_bdevs_list": [ 00:13:43.686 { 00:13:43.686 "name": "spare", 00:13:43.686 "uuid": "2323c108-c648-55c2-98e9-a5f1180a485f", 00:13:43.686 "is_configured": true, 00:13:43.686 "data_offset": 2048, 00:13:43.686 "data_size": 63488 00:13:43.686 }, 00:13:43.686 { 00:13:43.686 "name": "BaseBdev2", 00:13:43.686 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:43.686 "is_configured": true, 00:13:43.686 "data_offset": 2048, 00:13:43.686 "data_size": 63488 00:13:43.686 } 00:13:43.686 ] 00:13:43.686 }' 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.686 19:03:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.686 19:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:44.648 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:44.648 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.648 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.648 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.648 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.648 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.648 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.649 19:03:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.649 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.649 19:03:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.907 19:03:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.907 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.907 "name": "raid_bdev1", 00:13:44.907 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:44.907 "strip_size_kb": 0, 00:13:44.907 "state": "online", 00:13:44.907 "raid_level": "raid1", 00:13:44.907 "superblock": true, 00:13:44.907 "num_base_bdevs": 2, 00:13:44.907 "num_base_bdevs_discovered": 2, 00:13:44.907 "num_base_bdevs_operational": 2, 00:13:44.907 "process": { 
00:13:44.907 "type": "rebuild", 00:13:44.907 "target": "spare", 00:13:44.907 "progress": { 00:13:44.907 "blocks": 45056, 00:13:44.907 "percent": 70 00:13:44.907 } 00:13:44.907 }, 00:13:44.907 "base_bdevs_list": [ 00:13:44.907 { 00:13:44.907 "name": "spare", 00:13:44.907 "uuid": "2323c108-c648-55c2-98e9-a5f1180a485f", 00:13:44.907 "is_configured": true, 00:13:44.907 "data_offset": 2048, 00:13:44.907 "data_size": 63488 00:13:44.907 }, 00:13:44.907 { 00:13:44.907 "name": "BaseBdev2", 00:13:44.907 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:44.907 "is_configured": true, 00:13:44.907 "data_offset": 2048, 00:13:44.907 "data_size": 63488 00:13:44.907 } 00:13:44.907 ] 00:13:44.907 }' 00:13:44.907 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.907 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.907 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.907 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.907 19:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:45.843 [2024-11-26 19:03:12.100595] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:45.843 [2024-11-26 19:03:12.100698] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:45.843 [2024-11-26 19:03:12.100883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.843 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.843 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.843 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.843 
19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.843 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.843 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.843 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.843 19:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.843 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.843 19:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.843 19:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.843 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.843 "name": "raid_bdev1", 00:13:45.843 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:45.843 "strip_size_kb": 0, 00:13:45.843 "state": "online", 00:13:45.843 "raid_level": "raid1", 00:13:45.843 "superblock": true, 00:13:45.843 "num_base_bdevs": 2, 00:13:45.843 "num_base_bdevs_discovered": 2, 00:13:45.843 "num_base_bdevs_operational": 2, 00:13:45.843 "base_bdevs_list": [ 00:13:45.843 { 00:13:45.843 "name": "spare", 00:13:45.843 "uuid": "2323c108-c648-55c2-98e9-a5f1180a485f", 00:13:45.843 "is_configured": true, 00:13:45.843 "data_offset": 2048, 00:13:45.843 "data_size": 63488 00:13:45.843 }, 00:13:45.843 { 00:13:45.843 "name": "BaseBdev2", 00:13:45.843 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:45.843 "is_configured": true, 00:13:45.843 "data_offset": 2048, 00:13:45.843 "data_size": 63488 00:13:45.843 } 00:13:45.843 ] 00:13:45.843 }' 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.101 "name": "raid_bdev1", 00:13:46.101 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:46.101 "strip_size_kb": 0, 00:13:46.101 "state": "online", 00:13:46.101 "raid_level": "raid1", 00:13:46.101 "superblock": true, 00:13:46.101 "num_base_bdevs": 2, 00:13:46.101 "num_base_bdevs_discovered": 2, 00:13:46.101 "num_base_bdevs_operational": 2, 00:13:46.101 "base_bdevs_list": [ 00:13:46.101 { 00:13:46.101 
"name": "spare", 00:13:46.101 "uuid": "2323c108-c648-55c2-98e9-a5f1180a485f", 00:13:46.101 "is_configured": true, 00:13:46.101 "data_offset": 2048, 00:13:46.101 "data_size": 63488 00:13:46.101 }, 00:13:46.101 { 00:13:46.101 "name": "BaseBdev2", 00:13:46.101 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:46.101 "is_configured": true, 00:13:46.101 "data_offset": 2048, 00:13:46.101 "data_size": 63488 00:13:46.101 } 00:13:46.101 ] 00:13:46.101 }' 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.101 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.359 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.359 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:46.359 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.359 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.359 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.359 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.360 "name": "raid_bdev1", 00:13:46.360 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:46.360 "strip_size_kb": 0, 00:13:46.360 "state": "online", 00:13:46.360 "raid_level": "raid1", 00:13:46.360 "superblock": true, 00:13:46.360 "num_base_bdevs": 2, 00:13:46.360 "num_base_bdevs_discovered": 2, 00:13:46.360 "num_base_bdevs_operational": 2, 00:13:46.360 "base_bdevs_list": [ 00:13:46.360 { 00:13:46.360 "name": "spare", 00:13:46.360 "uuid": "2323c108-c648-55c2-98e9-a5f1180a485f", 00:13:46.360 "is_configured": true, 00:13:46.360 "data_offset": 2048, 00:13:46.360 "data_size": 63488 00:13:46.360 }, 00:13:46.360 { 00:13:46.360 "name": "BaseBdev2", 00:13:46.360 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:46.360 "is_configured": true, 00:13:46.360 "data_offset": 2048, 00:13:46.360 "data_size": 63488 00:13:46.360 } 00:13:46.360 ] 00:13:46.360 }' 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.360 19:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:46.927 [2024-11-26 19:03:13.260721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.927 [2024-11-26 19:03:13.260768] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.927 [2024-11-26 19:03:13.260882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.927 [2024-11-26 19:03:13.261033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.927 [2024-11-26 19:03:13.261056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:46.927 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:47.185 /dev/nbd0 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:47.186 1+0 records in 00:13:47.186 1+0 records out 00:13:47.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237209 s, 17.3 MB/s 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:47.186 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:47.445 /dev/nbd1 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:47.445 19:03:13 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:47.445 1+0 records in 00:13:47.445 1+0 records out 00:13:47.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372446 s, 11.0 MB/s 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:47.445 19:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:47.704 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:47.704 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.704 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:47.704 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:47.704 
19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:47.704 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:47.704 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:47.963 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:47.963 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:47.963 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:47.963 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:47.963 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:47.963 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:47.963 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:47.963 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:47.963 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:47.963 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.222 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.222 [2024-11-26 19:03:14.714059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:48.222 [2024-11-26 19:03:14.714338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.223 [2024-11-26 19:03:14.714405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:48.223 [2024-11-26 19:03:14.714424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.223 [2024-11-26 19:03:14.718281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.223 [2024-11-26 19:03:14.718379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:48.223 [2024-11-26 19:03:14.718527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:48.223 [2024-11-26 
19:03:14.718606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.223 [2024-11-26 19:03:14.718881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.223 spare 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.223 [2024-11-26 19:03:14.819048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:48.223 [2024-11-26 19:03:14.819370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:48.223 [2024-11-26 19:03:14.819906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:48.223 [2024-11-26 19:03:14.820239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:48.223 [2024-11-26 19:03:14.820277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:48.223 [2024-11-26 19:03:14.820586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.223 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.482 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.482 "name": "raid_bdev1", 00:13:48.482 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:48.482 "strip_size_kb": 0, 00:13:48.482 "state": "online", 00:13:48.482 "raid_level": "raid1", 00:13:48.482 "superblock": true, 00:13:48.482 "num_base_bdevs": 2, 00:13:48.482 "num_base_bdevs_discovered": 2, 00:13:48.482 "num_base_bdevs_operational": 2, 00:13:48.482 "base_bdevs_list": [ 00:13:48.482 { 00:13:48.482 "name": "spare", 00:13:48.482 "uuid": "2323c108-c648-55c2-98e9-a5f1180a485f", 00:13:48.482 "is_configured": true, 00:13:48.482 "data_offset": 2048, 00:13:48.482 "data_size": 63488 00:13:48.482 }, 00:13:48.482 { 00:13:48.482 "name": "BaseBdev2", 00:13:48.482 "uuid": 
"b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:48.482 "is_configured": true, 00:13:48.482 "data_offset": 2048, 00:13:48.483 "data_size": 63488 00:13:48.483 } 00:13:48.483 ] 00:13:48.483 }' 00:13:48.483 19:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.483 19:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.741 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.741 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.741 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.741 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.741 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.741 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.741 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.741 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.741 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.741 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.000 "name": "raid_bdev1", 00:13:49.000 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:49.000 "strip_size_kb": 0, 00:13:49.000 "state": "online", 00:13:49.000 "raid_level": "raid1", 00:13:49.000 "superblock": true, 00:13:49.000 "num_base_bdevs": 2, 00:13:49.000 "num_base_bdevs_discovered": 2, 00:13:49.000 "num_base_bdevs_operational": 2, 00:13:49.000 "base_bdevs_list": [ 00:13:49.000 { 
00:13:49.000 "name": "spare", 00:13:49.000 "uuid": "2323c108-c648-55c2-98e9-a5f1180a485f", 00:13:49.000 "is_configured": true, 00:13:49.000 "data_offset": 2048, 00:13:49.000 "data_size": 63488 00:13:49.000 }, 00:13:49.000 { 00:13:49.000 "name": "BaseBdev2", 00:13:49.000 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:49.000 "is_configured": true, 00:13:49.000 "data_offset": 2048, 00:13:49.000 "data_size": 63488 00:13:49.000 } 00:13:49.000 ] 00:13:49.000 }' 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.000 [2024-11-26 19:03:15.554888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.000 "name": "raid_bdev1", 00:13:49.000 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:49.000 "strip_size_kb": 0, 00:13:49.000 
"state": "online", 00:13:49.000 "raid_level": "raid1", 00:13:49.000 "superblock": true, 00:13:49.000 "num_base_bdevs": 2, 00:13:49.000 "num_base_bdevs_discovered": 1, 00:13:49.000 "num_base_bdevs_operational": 1, 00:13:49.000 "base_bdevs_list": [ 00:13:49.000 { 00:13:49.000 "name": null, 00:13:49.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.000 "is_configured": false, 00:13:49.000 "data_offset": 0, 00:13:49.000 "data_size": 63488 00:13:49.000 }, 00:13:49.000 { 00:13:49.000 "name": "BaseBdev2", 00:13:49.000 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:49.000 "is_configured": true, 00:13:49.000 "data_offset": 2048, 00:13:49.000 "data_size": 63488 00:13:49.000 } 00:13:49.000 ] 00:13:49.000 }' 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.000 19:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.609 19:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:49.609 19:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.609 19:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.609 [2024-11-26 19:03:16.091109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.609 [2024-11-26 19:03:16.091428] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:49.609 [2024-11-26 19:03:16.091467] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:49.609 [2024-11-26 19:03:16.091526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.609 [2024-11-26 19:03:16.109213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:49.609 19:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.609 19:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:49.609 [2024-11-26 19:03:16.111997] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:50.545 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.545 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.545 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.545 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.545 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.545 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.545 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.545 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.545 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.545 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.803 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.803 "name": "raid_bdev1", 00:13:50.803 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:50.803 "strip_size_kb": 0, 00:13:50.803 "state": "online", 00:13:50.803 "raid_level": "raid1", 
00:13:50.803 "superblock": true, 00:13:50.803 "num_base_bdevs": 2, 00:13:50.803 "num_base_bdevs_discovered": 2, 00:13:50.803 "num_base_bdevs_operational": 2, 00:13:50.803 "process": { 00:13:50.804 "type": "rebuild", 00:13:50.804 "target": "spare", 00:13:50.804 "progress": { 00:13:50.804 "blocks": 18432, 00:13:50.804 "percent": 29 00:13:50.804 } 00:13:50.804 }, 00:13:50.804 "base_bdevs_list": [ 00:13:50.804 { 00:13:50.804 "name": "spare", 00:13:50.804 "uuid": "2323c108-c648-55c2-98e9-a5f1180a485f", 00:13:50.804 "is_configured": true, 00:13:50.804 "data_offset": 2048, 00:13:50.804 "data_size": 63488 00:13:50.804 }, 00:13:50.804 { 00:13:50.804 "name": "BaseBdev2", 00:13:50.804 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:50.804 "is_configured": true, 00:13:50.804 "data_offset": 2048, 00:13:50.804 "data_size": 63488 00:13:50.804 } 00:13:50.804 ] 00:13:50.804 }' 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.804 [2024-11-26 19:03:17.290249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.804 [2024-11-26 19:03:17.324481] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:50.804 [2024-11-26 19:03:17.324622] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:50.804 [2024-11-26 19:03:17.324661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.804 [2024-11-26 19:03:17.324679] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.804 "name": "raid_bdev1", 00:13:50.804 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:50.804 "strip_size_kb": 0, 00:13:50.804 "state": "online", 00:13:50.804 "raid_level": "raid1", 00:13:50.804 "superblock": true, 00:13:50.804 "num_base_bdevs": 2, 00:13:50.804 "num_base_bdevs_discovered": 1, 00:13:50.804 "num_base_bdevs_operational": 1, 00:13:50.804 "base_bdevs_list": [ 00:13:50.804 { 00:13:50.804 "name": null, 00:13:50.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.804 "is_configured": false, 00:13:50.804 "data_offset": 0, 00:13:50.804 "data_size": 63488 00:13:50.804 }, 00:13:50.804 { 00:13:50.804 "name": "BaseBdev2", 00:13:50.804 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:50.804 "is_configured": true, 00:13:50.804 "data_offset": 2048, 00:13:50.804 "data_size": 63488 00:13:50.804 } 00:13:50.804 ] 00:13:50.804 }' 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.804 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.370 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:51.370 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.370 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.370 [2024-11-26 19:03:17.857257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:51.370 [2024-11-26 19:03:17.857372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.370 [2024-11-26 19:03:17.857409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:51.370 [2024-11-26 19:03:17.857428] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.370 [2024-11-26 19:03:17.858126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.370 [2024-11-26 19:03:17.858174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:51.370 [2024-11-26 19:03:17.858371] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:51.370 [2024-11-26 19:03:17.858400] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:51.370 [2024-11-26 19:03:17.858416] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:51.370 [2024-11-26 19:03:17.858454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.370 [2024-11-26 19:03:17.876237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:51.370 spare 00:13:51.370 19:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.370 19:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:51.370 [2024-11-26 19:03:17.879094] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.307 19:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.307 19:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.307 19:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.307 19:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.307 19:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.307 19:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:52.307 19:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.307 19:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.307 19:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.307 19:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.567 19:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.567 "name": "raid_bdev1", 00:13:52.567 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:52.567 "strip_size_kb": 0, 00:13:52.567 "state": "online", 00:13:52.567 "raid_level": "raid1", 00:13:52.567 "superblock": true, 00:13:52.567 "num_base_bdevs": 2, 00:13:52.567 "num_base_bdevs_discovered": 2, 00:13:52.567 "num_base_bdevs_operational": 2, 00:13:52.567 "process": { 00:13:52.567 "type": "rebuild", 00:13:52.567 "target": "spare", 00:13:52.567 "progress": { 00:13:52.567 "blocks": 20480, 00:13:52.567 "percent": 32 00:13:52.567 } 00:13:52.567 }, 00:13:52.567 "base_bdevs_list": [ 00:13:52.567 { 00:13:52.567 "name": "spare", 00:13:52.567 "uuid": "2323c108-c648-55c2-98e9-a5f1180a485f", 00:13:52.567 "is_configured": true, 00:13:52.567 "data_offset": 2048, 00:13:52.567 "data_size": 63488 00:13:52.567 }, 00:13:52.567 { 00:13:52.567 "name": "BaseBdev2", 00:13:52.567 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:52.567 "is_configured": true, 00:13:52.567 "data_offset": 2048, 00:13:52.567 "data_size": 63488 00:13:52.567 } 00:13:52.567 ] 00:13:52.567 }' 00:13:52.567 19:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.567 19:03:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.567 
19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.567 [2024-11-26 19:03:19.057562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.567 [2024-11-26 19:03:19.091208] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:52.567 [2024-11-26 19:03:19.091319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.567 [2024-11-26 19:03:19.091354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.567 [2024-11-26 19:03:19.091367] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.567 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.826 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.826 "name": "raid_bdev1", 00:13:52.826 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:52.826 "strip_size_kb": 0, 00:13:52.826 "state": "online", 00:13:52.826 "raid_level": "raid1", 00:13:52.826 "superblock": true, 00:13:52.826 "num_base_bdevs": 2, 00:13:52.826 "num_base_bdevs_discovered": 1, 00:13:52.826 "num_base_bdevs_operational": 1, 00:13:52.826 "base_bdevs_list": [ 00:13:52.826 { 00:13:52.826 "name": null, 00:13:52.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.826 "is_configured": false, 00:13:52.826 "data_offset": 0, 00:13:52.826 "data_size": 63488 00:13:52.826 }, 00:13:52.826 { 00:13:52.826 "name": "BaseBdev2", 00:13:52.826 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:52.826 "is_configured": true, 00:13:52.826 "data_offset": 2048, 00:13:52.826 "data_size": 63488 00:13:52.826 } 00:13:52.826 ] 00:13:52.826 }' 00:13:52.826 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.826 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.084 19:03:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:53.084 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.084 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:53.084 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:53.084 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.084 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.084 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.084 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.084 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.084 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.343 "name": "raid_bdev1", 00:13:53.343 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:53.343 "strip_size_kb": 0, 00:13:53.343 "state": "online", 00:13:53.343 "raid_level": "raid1", 00:13:53.343 "superblock": true, 00:13:53.343 "num_base_bdevs": 2, 00:13:53.343 "num_base_bdevs_discovered": 1, 00:13:53.343 "num_base_bdevs_operational": 1, 00:13:53.343 "base_bdevs_list": [ 00:13:53.343 { 00:13:53.343 "name": null, 00:13:53.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.343 "is_configured": false, 00:13:53.343 "data_offset": 0, 00:13:53.343 "data_size": 63488 00:13:53.343 }, 00:13:53.343 { 00:13:53.343 "name": "BaseBdev2", 00:13:53.343 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:53.343 "is_configured": true, 00:13:53.343 "data_offset": 2048, 00:13:53.343 "data_size": 
63488 00:13:53.343 } 00:13:53.343 ] 00:13:53.343 }' 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.343 [2024-11-26 19:03:19.817909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:53.343 [2024-11-26 19:03:19.817993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.343 [2024-11-26 19:03:19.818037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:53.343 [2024-11-26 19:03:19.818066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.343 [2024-11-26 19:03:19.818719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.343 [2024-11-26 19:03:19.818757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:53.343 [2024-11-26 19:03:19.818878] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:53.343 [2024-11-26 19:03:19.818902] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:53.343 [2024-11-26 19:03:19.818920] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:53.343 [2024-11-26 19:03:19.818935] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:53.343 BaseBdev1 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.343 19:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.279 "name": "raid_bdev1", 00:13:54.279 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:54.279 "strip_size_kb": 0, 00:13:54.279 "state": "online", 00:13:54.279 "raid_level": "raid1", 00:13:54.279 "superblock": true, 00:13:54.279 "num_base_bdevs": 2, 00:13:54.279 "num_base_bdevs_discovered": 1, 00:13:54.279 "num_base_bdevs_operational": 1, 00:13:54.279 "base_bdevs_list": [ 00:13:54.279 { 00:13:54.279 "name": null, 00:13:54.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.279 "is_configured": false, 00:13:54.279 "data_offset": 0, 00:13:54.279 "data_size": 63488 00:13:54.279 }, 00:13:54.279 { 00:13:54.279 "name": "BaseBdev2", 00:13:54.279 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:54.279 "is_configured": true, 00:13:54.279 "data_offset": 2048, 00:13:54.279 "data_size": 63488 00:13:54.279 } 00:13:54.279 ] 00:13:54.279 }' 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.279 19:03:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.847 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.847 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.847 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:54.847 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.847 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.847 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.847 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.847 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.847 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.847 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.847 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.847 "name": "raid_bdev1", 00:13:54.847 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:54.848 "strip_size_kb": 0, 00:13:54.848 "state": "online", 00:13:54.848 "raid_level": "raid1", 00:13:54.848 "superblock": true, 00:13:54.848 "num_base_bdevs": 2, 00:13:54.848 "num_base_bdevs_discovered": 1, 00:13:54.848 "num_base_bdevs_operational": 1, 00:13:54.848 "base_bdevs_list": [ 00:13:54.848 { 00:13:54.848 "name": null, 00:13:54.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.848 "is_configured": false, 00:13:54.848 "data_offset": 0, 00:13:54.848 "data_size": 63488 00:13:54.848 }, 00:13:54.848 { 00:13:54.848 "name": "BaseBdev2", 00:13:54.848 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:54.848 "is_configured": true, 00:13:54.848 "data_offset": 2048, 00:13:54.848 "data_size": 63488 00:13:54.848 } 00:13:54.848 ] 00:13:54.848 }' 00:13:54.848 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.848 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.848 19:03:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.107 [2024-11-26 19:03:21.530588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.107 [2024-11-26 19:03:21.530900] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:55.107 [2024-11-26 19:03:21.530967] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:55.107 request: 00:13:55.107 { 00:13:55.107 "base_bdev": "BaseBdev1", 00:13:55.107 "raid_bdev": "raid_bdev1", 00:13:55.107 "method": 
"bdev_raid_add_base_bdev", 00:13:55.107 "req_id": 1 00:13:55.107 } 00:13:55.107 Got JSON-RPC error response 00:13:55.107 response: 00:13:55.107 { 00:13:55.107 "code": -22, 00:13:55.107 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:55.107 } 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:55.107 19:03:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.046 19:03:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.046 "name": "raid_bdev1", 00:13:56.046 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:56.046 "strip_size_kb": 0, 00:13:56.046 "state": "online", 00:13:56.046 "raid_level": "raid1", 00:13:56.046 "superblock": true, 00:13:56.046 "num_base_bdevs": 2, 00:13:56.046 "num_base_bdevs_discovered": 1, 00:13:56.046 "num_base_bdevs_operational": 1, 00:13:56.046 "base_bdevs_list": [ 00:13:56.046 { 00:13:56.046 "name": null, 00:13:56.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.046 "is_configured": false, 00:13:56.046 "data_offset": 0, 00:13:56.046 "data_size": 63488 00:13:56.046 }, 00:13:56.046 { 00:13:56.046 "name": "BaseBdev2", 00:13:56.046 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:56.046 "is_configured": true, 00:13:56.046 "data_offset": 2048, 00:13:56.046 "data_size": 63488 00:13:56.046 } 00:13:56.046 ] 00:13:56.046 }' 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.046 19:03:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.615 "name": "raid_bdev1", 00:13:56.615 "uuid": "d2ecaca5-f357-4766-9cd6-3c7236034bd1", 00:13:56.615 "strip_size_kb": 0, 00:13:56.615 "state": "online", 00:13:56.615 "raid_level": "raid1", 00:13:56.615 "superblock": true, 00:13:56.615 "num_base_bdevs": 2, 00:13:56.615 "num_base_bdevs_discovered": 1, 00:13:56.615 "num_base_bdevs_operational": 1, 00:13:56.615 "base_bdevs_list": [ 00:13:56.615 { 00:13:56.615 "name": null, 00:13:56.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.615 "is_configured": false, 00:13:56.615 "data_offset": 0, 00:13:56.615 "data_size": 63488 00:13:56.615 }, 00:13:56.615 { 00:13:56.615 "name": "BaseBdev2", 00:13:56.615 "uuid": "b099c293-31f9-57d1-ae94-a87ffd1b95bd", 00:13:56.615 "is_configured": true, 00:13:56.615 "data_offset": 2048, 00:13:56.615 "data_size": 63488 00:13:56.615 } 00:13:56.615 ] 00:13:56.615 }' 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76325 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76325 ']' 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76325 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.615 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76325 00:13:56.875 killing process with pid 76325 00:13:56.875 Received shutdown signal, test time was about 60.000000 seconds 00:13:56.875 00:13:56.875 Latency(us) 00:13:56.875 [2024-11-26T19:03:23.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.875 [2024-11-26T19:03:23.498Z] =================================================================================================================== 00:13:56.875 [2024-11-26T19:03:23.498Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:56.875 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.875 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.875 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76325' 00:13:56.875 19:03:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76325 00:13:56.875 [2024-11-26 19:03:23.258807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.875 19:03:23 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76325 00:13:56.875 [2024-11-26 19:03:23.259014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.875 [2024-11-26 19:03:23.259122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.875 [2024-11-26 19:03:23.259145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:57.134 [2024-11-26 19:03:23.559987] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:58.512 00:13:58.512 real 0m26.560s 00:13:58.512 user 0m32.889s 00:13:58.512 sys 0m4.053s 00:13:58.512 ************************************ 00:13:58.512 END TEST raid_rebuild_test_sb 00:13:58.512 ************************************ 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.512 19:03:24 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:58.512 19:03:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:58.512 19:03:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.512 19:03:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.512 ************************************ 00:13:58.512 START TEST raid_rebuild_test_io 00:13:58.512 ************************************ 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:58.512 
19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:58.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77087 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77087 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 77087 ']' 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.512 19:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.512 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.512 Zero copy mechanism will not be used. 00:13:58.512 [2024-11-26 19:03:24.932438] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:13:58.512 [2024-11-26 19:03:24.932634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77087 ] 00:13:58.512 [2024-11-26 19:03:25.122688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.771 [2024-11-26 19:03:25.300318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.030 [2024-11-26 19:03:25.530110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.030 [2024-11-26 19:03:25.530470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.648 BaseBdev1_malloc 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.648 [2024-11-26 19:03:25.972134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:59.648 [2024-11-26 19:03:25.972217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.648 [2024-11-26 19:03:25.972251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:59.648 [2024-11-26 19:03:25.972271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.648 [2024-11-26 19:03:25.975230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.648 [2024-11-26 19:03:25.975294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:59.648 BaseBdev1 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.648 19:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.648 BaseBdev2_malloc 00:13:59.648 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.648 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:59.648 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.648 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.648 [2024-11-26 19:03:26.028682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:59.648 [2024-11-26 19:03:26.028795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.649 [2024-11-26 19:03:26.028833] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:59.649 [2024-11-26 19:03:26.028851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.649 [2024-11-26 19:03:26.032032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.649 [2024-11-26 19:03:26.032085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:59.649 BaseBdev2 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.649 spare_malloc 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.649 spare_delay 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.649 [2024-11-26 19:03:26.102689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:59.649 [2024-11-26 19:03:26.102931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.649 [2024-11-26 19:03:26.102976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:59.649 [2024-11-26 19:03:26.102997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.649 [2024-11-26 19:03:26.106067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.649 [2024-11-26 19:03:26.106268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:59.649 spare 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.649 [2024-11-26 19:03:26.115016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.649 [2024-11-26 19:03:26.117608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.649 [2024-11-26 19:03:26.117746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:59.649 [2024-11-26 19:03:26.117769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:59.649 [2024-11-26 19:03:26.118131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:59.649 [2024-11-26 19:03:26.118378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:59.649 [2024-11-26 19:03:26.118407] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:59.649 [2024-11-26 19:03:26.118631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.649 
"name": "raid_bdev1", 00:13:59.649 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:13:59.649 "strip_size_kb": 0, 00:13:59.649 "state": "online", 00:13:59.649 "raid_level": "raid1", 00:13:59.649 "superblock": false, 00:13:59.649 "num_base_bdevs": 2, 00:13:59.649 "num_base_bdevs_discovered": 2, 00:13:59.649 "num_base_bdevs_operational": 2, 00:13:59.649 "base_bdevs_list": [ 00:13:59.649 { 00:13:59.649 "name": "BaseBdev1", 00:13:59.649 "uuid": "1a405e93-3185-5789-b428-86e34a2828be", 00:13:59.649 "is_configured": true, 00:13:59.649 "data_offset": 0, 00:13:59.649 "data_size": 65536 00:13:59.649 }, 00:13:59.649 { 00:13:59.649 "name": "BaseBdev2", 00:13:59.649 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:13:59.649 "is_configured": true, 00:13:59.649 "data_offset": 0, 00:13:59.649 "data_size": 65536 00:13:59.649 } 00:13:59.649 ] 00:13:59.649 }' 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.649 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.217 [2024-11-26 19:03:26.655541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.217 [2024-11-26 19:03:26.759130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:00.217 19:03:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.217 "name": "raid_bdev1", 00:14:00.217 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:00.217 "strip_size_kb": 0, 00:14:00.217 "state": "online", 00:14:00.217 "raid_level": "raid1", 00:14:00.217 "superblock": false, 00:14:00.217 "num_base_bdevs": 2, 00:14:00.217 "num_base_bdevs_discovered": 1, 00:14:00.217 "num_base_bdevs_operational": 1, 00:14:00.217 "base_bdevs_list": [ 00:14:00.217 { 00:14:00.217 "name": null, 00:14:00.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.217 "is_configured": false, 00:14:00.217 "data_offset": 0, 00:14:00.217 "data_size": 65536 00:14:00.217 }, 00:14:00.217 { 00:14:00.217 "name": "BaseBdev2", 00:14:00.217 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:00.217 "is_configured": true, 00:14:00.217 "data_offset": 0, 00:14:00.217 "data_size": 65536 00:14:00.217 } 00:14:00.217 ] 00:14:00.217 }' 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:00.217 19:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.476 [2024-11-26 19:03:26.912387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:00.476 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:00.476 Zero copy mechanism will not be used. 00:14:00.476 Running I/O for 60 seconds... 00:14:00.735 19:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:00.735 19:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.735 19:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.735 [2024-11-26 19:03:27.325295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.994 19:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.994 19:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:00.994 [2024-11-26 19:03:27.415572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:00.994 [2024-11-26 19:03:27.418357] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.994 [2024-11-26 19:03:27.538142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:00.994 [2024-11-26 19:03:27.539113] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:01.252 [2024-11-26 19:03:27.666511] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:01.252 [2024-11-26 19:03:27.667072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:01.510 142.00 IOPS, 426.00 MiB/s 
[2024-11-26T19:03:28.133Z] [2024-11-26 19:03:28.039321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:01.769 [2024-11-26 19:03:28.169301] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:01.769 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.769 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.769 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.769 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.769 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.027 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.027 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.027 19:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.027 19:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.027 19:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.027 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.027 "name": "raid_bdev1", 00:14:02.027 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:02.027 "strip_size_kb": 0, 00:14:02.027 "state": "online", 00:14:02.027 "raid_level": "raid1", 00:14:02.027 "superblock": false, 00:14:02.027 "num_base_bdevs": 2, 00:14:02.027 "num_base_bdevs_discovered": 2, 00:14:02.028 "num_base_bdevs_operational": 2, 00:14:02.028 "process": { 00:14:02.028 "type": "rebuild", 00:14:02.028 "target": "spare", 
00:14:02.028 "progress": { 00:14:02.028 "blocks": 12288, 00:14:02.028 "percent": 18 00:14:02.028 } 00:14:02.028 }, 00:14:02.028 "base_bdevs_list": [ 00:14:02.028 { 00:14:02.028 "name": "spare", 00:14:02.028 "uuid": "b36e90b9-f792-5b53-8d35-839ff10347a3", 00:14:02.028 "is_configured": true, 00:14:02.028 "data_offset": 0, 00:14:02.028 "data_size": 65536 00:14:02.028 }, 00:14:02.028 { 00:14:02.028 "name": "BaseBdev2", 00:14:02.028 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:02.028 "is_configured": true, 00:14:02.028 "data_offset": 0, 00:14:02.028 "data_size": 65536 00:14:02.028 } 00:14:02.028 ] 00:14:02.028 }' 00:14:02.028 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.028 [2024-11-26 19:03:28.451746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:02.028 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.028 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.028 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.028 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:02.028 19:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.028 19:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.028 [2024-11-26 19:03:28.552475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.028 [2024-11-26 19:03:28.620414] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.028 [2024-11-26 19:03:28.632675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.028 [2024-11-26 19:03:28.632778] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.028 [2024-11-26 19:03:28.632795] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.287 [2024-11-26 19:03:28.678271] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.287 "name": "raid_bdev1", 00:14:02.287 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:02.287 "strip_size_kb": 0, 00:14:02.287 "state": "online", 00:14:02.287 "raid_level": "raid1", 00:14:02.287 "superblock": false, 00:14:02.287 "num_base_bdevs": 2, 00:14:02.287 "num_base_bdevs_discovered": 1, 00:14:02.287 "num_base_bdevs_operational": 1, 00:14:02.287 "base_bdevs_list": [ 00:14:02.287 { 00:14:02.287 "name": null, 00:14:02.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.287 "is_configured": false, 00:14:02.287 "data_offset": 0, 00:14:02.287 "data_size": 65536 00:14:02.287 }, 00:14:02.287 { 00:14:02.287 "name": "BaseBdev2", 00:14:02.287 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:02.287 "is_configured": true, 00:14:02.287 "data_offset": 0, 00:14:02.287 "data_size": 65536 00:14:02.287 } 00:14:02.287 ] 00:14:02.287 }' 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.287 19:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.803 130.00 IOPS, 390.00 MiB/s [2024-11-26T19:03:29.426Z] 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.803 "name": "raid_bdev1", 00:14:02.803 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:02.803 "strip_size_kb": 0, 00:14:02.803 "state": "online", 00:14:02.803 "raid_level": "raid1", 00:14:02.803 "superblock": false, 00:14:02.803 "num_base_bdevs": 2, 00:14:02.803 "num_base_bdevs_discovered": 1, 00:14:02.803 "num_base_bdevs_operational": 1, 00:14:02.803 "base_bdevs_list": [ 00:14:02.803 { 00:14:02.803 "name": null, 00:14:02.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.803 "is_configured": false, 00:14:02.803 "data_offset": 0, 00:14:02.803 "data_size": 65536 00:14:02.803 }, 00:14:02.803 { 00:14:02.803 "name": "BaseBdev2", 00:14:02.803 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:02.803 "is_configured": true, 00:14:02.803 "data_offset": 0, 00:14:02.803 "data_size": 65536 00:14:02.803 } 00:14:02.803 ] 00:14:02.803 }' 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.803 19:03:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.803 19:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.803 [2024-11-26 19:03:29.401072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.063 19:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.063 19:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:03.063 [2024-11-26 19:03:29.477294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:03.063 [2024-11-26 19:03:29.480146] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.063 [2024-11-26 19:03:29.600149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:03.063 [2024-11-26 19:03:29.601158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:03.321 [2024-11-26 19:03:29.830142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:03.321 [2024-11-26 19:03:29.830773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:03.590 131.33 IOPS, 394.00 MiB/s [2024-11-26T19:03:30.213Z] [2024-11-26 19:03:30.179297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:03.848 [2024-11-26 19:03:30.425701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:03.848 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.848 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:03.848 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.848 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.848 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.848 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.848 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.848 19:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.848 19:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.108 19:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.108 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.108 "name": "raid_bdev1", 00:14:04.108 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:04.108 "strip_size_kb": 0, 00:14:04.108 "state": "online", 00:14:04.108 "raid_level": "raid1", 00:14:04.108 "superblock": false, 00:14:04.108 "num_base_bdevs": 2, 00:14:04.108 "num_base_bdevs_discovered": 2, 00:14:04.108 "num_base_bdevs_operational": 2, 00:14:04.108 "process": { 00:14:04.108 "type": "rebuild", 00:14:04.108 "target": "spare", 00:14:04.108 "progress": { 00:14:04.108 "blocks": 10240, 00:14:04.108 "percent": 15 00:14:04.108 } 00:14:04.108 }, 00:14:04.108 "base_bdevs_list": [ 00:14:04.108 { 00:14:04.108 "name": "spare", 00:14:04.108 "uuid": "b36e90b9-f792-5b53-8d35-839ff10347a3", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 0, 00:14:04.108 "data_size": 65536 00:14:04.108 }, 00:14:04.108 { 00:14:04.108 "name": "BaseBdev2", 00:14:04.108 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:04.108 "is_configured": true, 00:14:04.108 "data_offset": 0, 00:14:04.108 
"data_size": 65536 00:14:04.108 } 00:14:04.108 ] 00:14:04.108 }' 00:14:04.108 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.108 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.108 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.108 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.108 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:04.108 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:04.108 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=448 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.109 19:03:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.109 "name": "raid_bdev1", 00:14:04.109 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:04.109 "strip_size_kb": 0, 00:14:04.109 "state": "online", 00:14:04.109 "raid_level": "raid1", 00:14:04.109 "superblock": false, 00:14:04.109 "num_base_bdevs": 2, 00:14:04.109 "num_base_bdevs_discovered": 2, 00:14:04.109 "num_base_bdevs_operational": 2, 00:14:04.109 "process": { 00:14:04.109 "type": "rebuild", 00:14:04.109 "target": "spare", 00:14:04.109 "progress": { 00:14:04.109 "blocks": 10240, 00:14:04.109 "percent": 15 00:14:04.109 } 00:14:04.109 }, 00:14:04.109 "base_bdevs_list": [ 00:14:04.109 { 00:14:04.109 "name": "spare", 00:14:04.109 "uuid": "b36e90b9-f792-5b53-8d35-839ff10347a3", 00:14:04.109 "is_configured": true, 00:14:04.109 "data_offset": 0, 00:14:04.109 "data_size": 65536 00:14:04.109 }, 00:14:04.109 { 00:14:04.109 "name": "BaseBdev2", 00:14:04.109 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:04.109 "is_configured": true, 00:14:04.109 "data_offset": 0, 00:14:04.109 "data_size": 65536 00:14:04.109 } 00:14:04.109 ] 00:14:04.109 }' 00:14:04.109 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.368 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.368 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.368 [2024-11-26 19:03:30.783774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:04.368 19:03:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.368 19:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:04.368 [2024-11-26 19:03:30.920033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:04.933 120.75 IOPS, 362.25 MiB/s [2024-11-26T19:03:31.556Z] [2024-11-26 19:03:31.288013] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:04.933 [2024-11-26 19:03:31.490909] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:04.933 [2024-11-26 19:03:31.491501] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:05.499 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.499 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.500 [2024-11-26 19:03:31.875661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.500 "name": "raid_bdev1", 00:14:05.500 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:05.500 "strip_size_kb": 0, 00:14:05.500 "state": "online", 00:14:05.500 "raid_level": "raid1", 00:14:05.500 "superblock": false, 00:14:05.500 "num_base_bdevs": 2, 00:14:05.500 "num_base_bdevs_discovered": 2, 00:14:05.500 "num_base_bdevs_operational": 2, 00:14:05.500 "process": { 00:14:05.500 "type": "rebuild", 00:14:05.500 "target": "spare", 00:14:05.500 "progress": { 00:14:05.500 "blocks": 24576, 00:14:05.500 "percent": 37 00:14:05.500 } 00:14:05.500 }, 00:14:05.500 "base_bdevs_list": [ 00:14:05.500 { 00:14:05.500 "name": "spare", 00:14:05.500 "uuid": "b36e90b9-f792-5b53-8d35-839ff10347a3", 00:14:05.500 "is_configured": true, 00:14:05.500 "data_offset": 0, 00:14:05.500 "data_size": 65536 00:14:05.500 }, 00:14:05.500 { 00:14:05.500 "name": "BaseBdev2", 00:14:05.500 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:05.500 "is_configured": true, 00:14:05.500 "data_offset": 0, 00:14:05.500 "data_size": 65536 00:14:05.500 } 00:14:05.500 ] 00:14:05.500 }' 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.500 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.500 109.00 IOPS, 327.00 MiB/s [2024-11-26T19:03:32.123Z] 19:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.500 19:03:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.500 [2024-11-26 19:03:32.012675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:06.433 [2024-11-26 19:03:32.796252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:06.433 98.33 IOPS, 295.00 MiB/s [2024-11-26T19:03:33.056Z] 19:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.433 19:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.433 19:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.433 19:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.433 19:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.433 19:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.433 19:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.433 19:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.433 19:03:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.433 19:03:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.433 19:03:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.433 19:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.433 "name": "raid_bdev1", 00:14:06.433 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:06.433 "strip_size_kb": 0, 00:14:06.433 "state": "online", 00:14:06.433 "raid_level": "raid1", 00:14:06.433 "superblock": 
false, 00:14:06.433 "num_base_bdevs": 2, 00:14:06.433 "num_base_bdevs_discovered": 2, 00:14:06.433 "num_base_bdevs_operational": 2, 00:14:06.433 "process": { 00:14:06.433 "type": "rebuild", 00:14:06.433 "target": "spare", 00:14:06.433 "progress": { 00:14:06.433 "blocks": 43008, 00:14:06.433 "percent": 65 00:14:06.433 } 00:14:06.433 }, 00:14:06.433 "base_bdevs_list": [ 00:14:06.433 { 00:14:06.433 "name": "spare", 00:14:06.433 "uuid": "b36e90b9-f792-5b53-8d35-839ff10347a3", 00:14:06.433 "is_configured": true, 00:14:06.433 "data_offset": 0, 00:14:06.433 "data_size": 65536 00:14:06.433 }, 00:14:06.433 { 00:14:06.433 "name": "BaseBdev2", 00:14:06.433 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:06.433 "is_configured": true, 00:14:06.433 "data_offset": 0, 00:14:06.433 "data_size": 65536 00:14:06.433 } 00:14:06.433 ] 00:14:06.433 }' 00:14:06.433 19:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.691 19:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.691 19:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.691 19:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.691 19:03:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.258 [2024-11-26 19:03:33.704372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:07.258 [2024-11-26 19:03:33.814637] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:07.775 89.14 IOPS, 267.43 MiB/s [2024-11-26T19:03:34.398Z] 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.775 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:14:07.775 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.775 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.776 [2024-11-26 19:03:34.160004] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.776 "name": "raid_bdev1", 00:14:07.776 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:07.776 "strip_size_kb": 0, 00:14:07.776 "state": "online", 00:14:07.776 "raid_level": "raid1", 00:14:07.776 "superblock": false, 00:14:07.776 "num_base_bdevs": 2, 00:14:07.776 "num_base_bdevs_discovered": 2, 00:14:07.776 "num_base_bdevs_operational": 2, 00:14:07.776 "process": { 00:14:07.776 "type": "rebuild", 00:14:07.776 "target": "spare", 00:14:07.776 "progress": { 00:14:07.776 "blocks": 65536, 00:14:07.776 "percent": 100 00:14:07.776 } 00:14:07.776 }, 00:14:07.776 "base_bdevs_list": [ 00:14:07.776 { 00:14:07.776 "name": "spare", 00:14:07.776 "uuid": "b36e90b9-f792-5b53-8d35-839ff10347a3", 00:14:07.776 "is_configured": true, 00:14:07.776 "data_offset": 0, 
00:14:07.776 "data_size": 65536 00:14:07.776 }, 00:14:07.776 { 00:14:07.776 "name": "BaseBdev2", 00:14:07.776 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:07.776 "is_configured": true, 00:14:07.776 "data_offset": 0, 00:14:07.776 "data_size": 65536 00:14:07.776 } 00:14:07.776 ] 00:14:07.776 }' 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.776 [2024-11-26 19:03:34.259960] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:07.776 [2024-11-26 19:03:34.264136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.776 19:03:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:08.908 82.75 IOPS, 248.25 MiB/s [2024-11-26T19:03:35.531Z] 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.908 "name": "raid_bdev1", 00:14:08.908 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:08.908 "strip_size_kb": 0, 00:14:08.908 "state": "online", 00:14:08.908 "raid_level": "raid1", 00:14:08.908 "superblock": false, 00:14:08.908 "num_base_bdevs": 2, 00:14:08.908 "num_base_bdevs_discovered": 2, 00:14:08.908 "num_base_bdevs_operational": 2, 00:14:08.908 "base_bdevs_list": [ 00:14:08.908 { 00:14:08.908 "name": "spare", 00:14:08.908 "uuid": "b36e90b9-f792-5b53-8d35-839ff10347a3", 00:14:08.908 "is_configured": true, 00:14:08.908 "data_offset": 0, 00:14:08.908 "data_size": 65536 00:14:08.908 }, 00:14:08.908 { 00:14:08.908 "name": "BaseBdev2", 00:14:08.908 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:08.908 "is_configured": true, 00:14:08.908 "data_offset": 0, 00:14:08.908 "data_size": 65536 00:14:08.908 } 00:14:08.908 ] 00:14:08.908 }' 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.908 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.908 "name": "raid_bdev1", 00:14:08.908 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:08.908 "strip_size_kb": 0, 00:14:08.908 "state": "online", 00:14:08.908 "raid_level": "raid1", 00:14:08.908 "superblock": false, 00:14:08.908 "num_base_bdevs": 2, 00:14:08.909 "num_base_bdevs_discovered": 2, 00:14:08.909 "num_base_bdevs_operational": 2, 00:14:08.909 "base_bdevs_list": [ 00:14:08.909 { 00:14:08.909 "name": "spare", 00:14:08.909 "uuid": "b36e90b9-f792-5b53-8d35-839ff10347a3", 00:14:08.909 "is_configured": true, 00:14:08.909 "data_offset": 0, 00:14:08.909 "data_size": 65536 00:14:08.909 }, 00:14:08.909 { 00:14:08.909 "name": "BaseBdev2", 00:14:08.909 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:08.909 "is_configured": true, 00:14:08.909 "data_offset": 0, 00:14:08.909 "data_size": 65536 00:14:08.909 } 00:14:08.909 ] 00:14:08.909 }' 00:14:09.167 19:03:35 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.167 19:03:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.167 "name": "raid_bdev1", 00:14:09.167 "uuid": "7536fce4-7d5d-40e0-bdc3-ebb83b6fd184", 00:14:09.167 "strip_size_kb": 0, 00:14:09.167 "state": "online", 00:14:09.167 "raid_level": "raid1", 00:14:09.167 "superblock": false, 00:14:09.167 "num_base_bdevs": 2, 00:14:09.167 "num_base_bdevs_discovered": 2, 00:14:09.167 "num_base_bdevs_operational": 2, 00:14:09.167 "base_bdevs_list": [ 00:14:09.167 { 00:14:09.167 "name": "spare", 00:14:09.167 "uuid": "b36e90b9-f792-5b53-8d35-839ff10347a3", 00:14:09.167 "is_configured": true, 00:14:09.167 "data_offset": 0, 00:14:09.167 "data_size": 65536 00:14:09.167 }, 00:14:09.167 { 00:14:09.167 "name": "BaseBdev2", 00:14:09.167 "uuid": "a9c032a0-de28-5307-b43e-d3b327c78738", 00:14:09.167 "is_configured": true, 00:14:09.167 "data_offset": 0, 00:14:09.167 "data_size": 65536 00:14:09.167 } 00:14:09.167 ] 00:14:09.167 }' 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.167 19:03:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.684 76.89 IOPS, 230.67 MiB/s [2024-11-26T19:03:36.307Z] 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:09.684 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.684 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.684 [2024-11-26 19:03:36.185897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:09.684 [2024-11-26 19:03:36.185974] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.684 00:14:09.684 Latency(us) 00:14:09.684 [2024-11-26T19:03:36.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:14:09.684 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:09.684 raid_bdev1 : 9.35 74.86 224.58 0.00 0.00 18084.37 314.65 122969.37 00:14:09.684 [2024-11-26T19:03:36.307Z] =================================================================================================================== 00:14:09.684 [2024-11-26T19:03:36.307Z] Total : 74.86 224.58 0.00 0.00 18084.37 314.65 122969.37 00:14:09.684 [2024-11-26 19:03:36.287981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.684 [2024-11-26 19:03:36.288145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.684 [2024-11-26 19:03:36.288279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.684 [2024-11-26 19:03:36.288344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:09.684 { 00:14:09.684 "results": [ 00:14:09.684 { 00:14:09.684 "job": "raid_bdev1", 00:14:09.684 "core_mask": "0x1", 00:14:09.684 "workload": "randrw", 00:14:09.684 "percentage": 50, 00:14:09.684 "status": "finished", 00:14:09.684 "queue_depth": 2, 00:14:09.684 "io_size": 3145728, 00:14:09.684 "runtime": 9.350659, 00:14:09.684 "iops": 74.86103385868311, 00:14:09.684 "mibps": 224.58310157604933, 00:14:09.684 "io_failed": 0, 00:14:09.684 "io_timeout": 0, 00:14:09.684 "avg_latency_us": 18084.371948051947, 00:14:09.684 "min_latency_us": 314.6472727272727, 00:14:09.684 "max_latency_us": 122969.36727272728 00:14:09.684 } 00:14:09.684 ], 00:14:09.684 "core_count": 1 00:14:09.684 } 00:14:09.684 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.684 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.684 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 
00:14:09.684 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.684 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:09.943 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:10.202 /dev/nbd0 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:10.202 19:03:36 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.202 1+0 records in 00:14:10.202 1+0 records out 00:14:10.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420436 s, 9.7 MB/s 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:10.202 19:03:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:10.489 /dev/nbd1 00:14:10.489 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:10.489 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:10.489 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:10.489 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:10.489 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.490 1+0 records in 00:14:10.490 1+0 records out 00:14:10.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477301 s, 8.6 MB/s 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:10.490 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:10.748 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:10.748 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.748 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:10.748 19:03:37 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:10.748 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:10.748 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.748 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.007 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 77087 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 77087 ']' 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 77087 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77087 00:14:11.574 killing process with pid 77087 00:14:11.574 Received shutdown signal, test time was about 11.035542 seconds 00:14:11.574 00:14:11.574 Latency(us) 00:14:11.574 [2024-11-26T19:03:38.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.574 [2024-11-26T19:03:38.197Z] 
=================================================================================================================== 00:14:11.574 [2024-11-26T19:03:38.197Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77087' 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 77087 00:14:11.574 [2024-11-26 19:03:37.950705] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.574 19:03:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 77087 00:14:11.574 [2024-11-26 19:03:38.181076] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.950 19:03:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:12.951 ************************************ 00:14:12.951 END TEST raid_rebuild_test_io 00:14:12.951 ************************************ 00:14:12.951 00:14:12.951 real 0m14.639s 00:14:12.951 user 0m18.999s 00:14:12.951 sys 0m1.633s 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.951 19:03:39 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:12.951 19:03:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:12.951 19:03:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.951 19:03:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.951 ************************************ 00:14:12.951 START TEST raid_rebuild_test_sb_io 
00:14:12.951 ************************************ 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local 
strip_size 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:12.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77491 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77491 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77491 ']' 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.951 19:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.209 [2024-11-26 19:03:39.622484] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:14:13.209 [2024-11-26 19:03:39.622925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77491 ] 00:14:13.209 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:13.209 Zero copy mechanism will not be used. 00:14:13.209 [2024-11-26 19:03:39.815670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.467 [2024-11-26 19:03:39.973601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.727 [2024-11-26 19:03:40.205511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.727 [2024-11-26 19:03:40.205596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.323 BaseBdev1_malloc 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.323 [2024-11-26 19:03:40.707280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:14.323 [2024-11-26 19:03:40.707418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.323 [2024-11-26 19:03:40.707457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:14.323 [2024-11-26 19:03:40.707481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.323 [2024-11-26 19:03:40.710534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.323 [2024-11-26 19:03:40.710589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:14.323 BaseBdev1 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.323 BaseBdev2_malloc 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.323 [2024-11-26 19:03:40.769105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:14.323 [2024-11-26 19:03:40.769197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.323 [2024-11-26 19:03:40.769237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:14.323 [2024-11-26 19:03:40.769261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.323 [2024-11-26 19:03:40.772391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.323 [2024-11-26 19:03:40.772459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:14.323 BaseBdev2 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.323 spare_malloc 00:14:14.323 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.324 spare_delay 
00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.324 [2024-11-26 19:03:40.848517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:14.324 [2024-11-26 19:03:40.848622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.324 [2024-11-26 19:03:40.848658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:14.324 [2024-11-26 19:03:40.848695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.324 [2024-11-26 19:03:40.851945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.324 [2024-11-26 19:03:40.852004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:14.324 spare 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.324 [2024-11-26 19:03:40.860721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.324 [2024-11-26 19:03:40.863590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.324 [2024-11-26 19:03:40.864046] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:14.324 [2024-11-26 19:03:40.864201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:14.324 [2024-11-26 19:03:40.864616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:14.324 [2024-11-26 19:03:40.865029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:14.324 [2024-11-26 19:03:40.865184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:14.324 [2024-11-26 19:03:40.865606] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.324 19:03:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.324 "name": "raid_bdev1", 00:14:14.324 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:14.324 "strip_size_kb": 0, 00:14:14.324 "state": "online", 00:14:14.324 "raid_level": "raid1", 00:14:14.324 "superblock": true, 00:14:14.324 "num_base_bdevs": 2, 00:14:14.324 "num_base_bdevs_discovered": 2, 00:14:14.324 "num_base_bdevs_operational": 2, 00:14:14.324 "base_bdevs_list": [ 00:14:14.324 { 00:14:14.324 "name": "BaseBdev1", 00:14:14.324 "uuid": "081826c4-9fa7-5df7-af86-7bf9f77a8aba", 00:14:14.324 "is_configured": true, 00:14:14.324 "data_offset": 2048, 00:14:14.324 "data_size": 63488 00:14:14.324 }, 00:14:14.324 { 00:14:14.324 "name": "BaseBdev2", 00:14:14.324 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:14.324 "is_configured": true, 00:14:14.324 "data_offset": 2048, 00:14:14.324 "data_size": 63488 00:14:14.324 } 00:14:14.324 ] 00:14:14.324 }' 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.324 19:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:14.892 19:03:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.892 [2024-11-26 19:03:41.378150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.892 [2024-11-26 19:03:41.465804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.892 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.150 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.150 "name": "raid_bdev1", 00:14:15.150 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:15.150 "strip_size_kb": 0, 00:14:15.150 "state": "online", 00:14:15.150 
"raid_level": "raid1", 00:14:15.150 "superblock": true, 00:14:15.150 "num_base_bdevs": 2, 00:14:15.150 "num_base_bdevs_discovered": 1, 00:14:15.150 "num_base_bdevs_operational": 1, 00:14:15.150 "base_bdevs_list": [ 00:14:15.150 { 00:14:15.150 "name": null, 00:14:15.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.150 "is_configured": false, 00:14:15.150 "data_offset": 0, 00:14:15.150 "data_size": 63488 00:14:15.150 }, 00:14:15.150 { 00:14:15.150 "name": "BaseBdev2", 00:14:15.150 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:15.150 "is_configured": true, 00:14:15.150 "data_offset": 2048, 00:14:15.150 "data_size": 63488 00:14:15.150 } 00:14:15.150 ] 00:14:15.150 }' 00:14:15.150 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.150 19:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.150 [2024-11-26 19:03:41.603542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:15.150 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:15.150 Zero copy mechanism will not be used. 00:14:15.150 Running I/O for 60 seconds... 
00:14:15.409 19:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.409 19:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.409 19:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.409 [2024-11-26 19:03:42.017032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.669 19:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.669 19:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:15.669 [2024-11-26 19:03:42.102546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:15.669 [2024-11-26 19:03:42.105480] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.669 [2024-11-26 19:03:42.234329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:15.669 [2024-11-26 19:03:42.235269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:15.928 [2024-11-26 19:03:42.439429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:15.928 [2024-11-26 19:03:42.439961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:16.186 143.00 IOPS, 429.00 MiB/s [2024-11-26T19:03:42.809Z] [2024-11-26 19:03:42.692427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:16.445 [2024-11-26 19:03:42.957807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:16.445 [2024-11-26 19:03:42.958736] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.703 "name": "raid_bdev1", 00:14:16.703 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:16.703 "strip_size_kb": 0, 00:14:16.703 "state": "online", 00:14:16.703 "raid_level": "raid1", 00:14:16.703 "superblock": true, 00:14:16.703 "num_base_bdevs": 2, 00:14:16.703 "num_base_bdevs_discovered": 2, 00:14:16.703 "num_base_bdevs_operational": 2, 00:14:16.703 "process": { 00:14:16.703 "type": "rebuild", 00:14:16.703 "target": "spare", 00:14:16.703 "progress": { 00:14:16.703 "blocks": 10240, 00:14:16.703 "percent": 16 00:14:16.703 } 00:14:16.703 }, 00:14:16.703 "base_bdevs_list": [ 00:14:16.703 { 00:14:16.703 "name": "spare", 
00:14:16.703 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:16.703 "is_configured": true, 00:14:16.703 "data_offset": 2048, 00:14:16.703 "data_size": 63488 00:14:16.703 }, 00:14:16.703 { 00:14:16.703 "name": "BaseBdev2", 00:14:16.703 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:16.703 "is_configured": true, 00:14:16.703 "data_offset": 2048, 00:14:16.703 "data_size": 63488 00:14:16.703 } 00:14:16.703 ] 00:14:16.703 }' 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.703 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.703 [2024-11-26 19:03:43.246037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.703 [2024-11-26 19:03:43.314677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:16.703 [2024-11-26 19:03:43.315658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:16.963 [2024-11-26 19:03:43.324670] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:16.963 [2024-11-26 19:03:43.335629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.963 [2024-11-26 19:03:43.335710] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.963 [2024-11-26 19:03:43.335730] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:16.963 [2024-11-26 19:03:43.382482] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.963 19:03:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.963 "name": "raid_bdev1", 00:14:16.963 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:16.963 "strip_size_kb": 0, 00:14:16.963 "state": "online", 00:14:16.963 "raid_level": "raid1", 00:14:16.963 "superblock": true, 00:14:16.963 "num_base_bdevs": 2, 00:14:16.963 "num_base_bdevs_discovered": 1, 00:14:16.963 "num_base_bdevs_operational": 1, 00:14:16.963 "base_bdevs_list": [ 00:14:16.963 { 00:14:16.963 "name": null, 00:14:16.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.963 "is_configured": false, 00:14:16.963 "data_offset": 0, 00:14:16.963 "data_size": 63488 00:14:16.963 }, 00:14:16.963 { 00:14:16.963 "name": "BaseBdev2", 00:14:16.963 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:16.963 "is_configured": true, 00:14:16.963 "data_offset": 2048, 00:14:16.963 "data_size": 63488 00:14:16.963 } 00:14:16.963 ] 00:14:16.963 }' 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.963 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.481 132.00 IOPS, 396.00 MiB/s [2024-11-26T19:03:44.104Z] 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.481 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.481 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.481 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.481 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.481 
19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.481 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.481 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.481 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.481 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.481 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.481 "name": "raid_bdev1", 00:14:17.481 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:17.481 "strip_size_kb": 0, 00:14:17.481 "state": "online", 00:14:17.481 "raid_level": "raid1", 00:14:17.481 "superblock": true, 00:14:17.481 "num_base_bdevs": 2, 00:14:17.481 "num_base_bdevs_discovered": 1, 00:14:17.481 "num_base_bdevs_operational": 1, 00:14:17.481 "base_bdevs_list": [ 00:14:17.481 { 00:14:17.481 "name": null, 00:14:17.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.481 "is_configured": false, 00:14:17.481 "data_offset": 0, 00:14:17.481 "data_size": 63488 00:14:17.481 }, 00:14:17.481 { 00:14:17.481 "name": "BaseBdev2", 00:14:17.481 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:17.481 "is_configured": true, 00:14:17.481 "data_offset": 2048, 00:14:17.481 "data_size": 63488 00:14:17.481 } 00:14:17.481 ] 00:14:17.481 }' 00:14:17.481 19:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.481 19:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.481 19:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.481 19:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.481 19:03:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:17.481 19:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.481 19:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.765 [2024-11-26 19:03:44.111100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.765 19:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.765 19:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:17.765 [2024-11-26 19:03:44.168185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:17.765 [2024-11-26 19:03:44.171093] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.765 [2024-11-26 19:03:44.309182] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:17.765 [2024-11-26 19:03:44.310102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:18.024 [2024-11-26 19:03:44.564698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:18.024 [2024-11-26 19:03:44.565218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:18.590 155.33 IOPS, 466.00 MiB/s [2024-11-26T19:03:45.213Z] [2024-11-26 19:03:44.939603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:18.590 [2024-11-26 19:03:44.940175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:18.590 [2024-11-26 19:03:45.077161] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:18.590 [2024-11-26 19:03:45.077659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:18.590 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.590 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.590 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.590 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.590 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.590 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.590 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.590 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.590 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.590 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.849 "name": "raid_bdev1", 00:14:18.849 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:18.849 "strip_size_kb": 0, 00:14:18.849 "state": "online", 00:14:18.849 "raid_level": "raid1", 00:14:18.849 "superblock": true, 00:14:18.849 "num_base_bdevs": 2, 00:14:18.849 "num_base_bdevs_discovered": 2, 00:14:18.849 "num_base_bdevs_operational": 2, 00:14:18.849 "process": { 00:14:18.849 "type": "rebuild", 00:14:18.849 "target": "spare", 00:14:18.849 "progress": { 
00:14:18.849 "blocks": 10240, 00:14:18.849 "percent": 16 00:14:18.849 } 00:14:18.849 }, 00:14:18.849 "base_bdevs_list": [ 00:14:18.849 { 00:14:18.849 "name": "spare", 00:14:18.849 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:18.849 "is_configured": true, 00:14:18.849 "data_offset": 2048, 00:14:18.849 "data_size": 63488 00:14:18.849 }, 00:14:18.849 { 00:14:18.849 "name": "BaseBdev2", 00:14:18.849 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:18.849 "is_configured": true, 00:14:18.849 "data_offset": 2048, 00:14:18.849 "data_size": 63488 00:14:18.849 } 00:14:18.849 ] 00:14:18.849 }' 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:18.849 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=463 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.849 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.849 "name": "raid_bdev1", 00:14:18.849 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:18.849 "strip_size_kb": 0, 00:14:18.850 "state": "online", 00:14:18.850 "raid_level": "raid1", 00:14:18.850 "superblock": true, 00:14:18.850 "num_base_bdevs": 2, 00:14:18.850 "num_base_bdevs_discovered": 2, 00:14:18.850 "num_base_bdevs_operational": 2, 00:14:18.850 "process": { 00:14:18.850 "type": "rebuild", 00:14:18.850 "target": "spare", 00:14:18.850 "progress": { 00:14:18.850 "blocks": 12288, 00:14:18.850 "percent": 19 00:14:18.850 } 00:14:18.850 }, 00:14:18.850 "base_bdevs_list": [ 00:14:18.850 { 00:14:18.850 "name": "spare", 00:14:18.850 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:18.850 "is_configured": true, 00:14:18.850 "data_offset": 2048, 00:14:18.850 "data_size": 63488 
00:14:18.850 }, 00:14:18.850 { 00:14:18.850 "name": "BaseBdev2", 00:14:18.850 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:18.850 "is_configured": true, 00:14:18.850 "data_offset": 2048, 00:14:18.850 "data_size": 63488 00:14:18.850 } 00:14:18.850 ] 00:14:18.850 }' 00:14:18.850 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.850 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.850 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.850 [2024-11-26 19:03:45.442253] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:19.108 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.108 19:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.367 147.50 IOPS, 442.50 MiB/s [2024-11-26T19:03:45.990Z] [2024-11-26 19:03:45.901846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:19.626 [2024-11-26 19:03:46.115842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:19.626 [2024-11-26 19:03:46.217714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:19.626 [2024-11-26 19:03:46.218118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:19.884 [2024-11-26 19:03:46.468577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:19.884 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.884 
19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.884 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.884 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.884 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.884 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.884 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.884 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.884 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.884 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.143 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.143 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.143 "name": "raid_bdev1", 00:14:20.143 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:20.143 "strip_size_kb": 0, 00:14:20.143 "state": "online", 00:14:20.143 "raid_level": "raid1", 00:14:20.143 "superblock": true, 00:14:20.143 "num_base_bdevs": 2, 00:14:20.143 "num_base_bdevs_discovered": 2, 00:14:20.143 "num_base_bdevs_operational": 2, 00:14:20.143 "process": { 00:14:20.143 "type": "rebuild", 00:14:20.143 "target": "spare", 00:14:20.143 "progress": { 00:14:20.143 "blocks": 32768, 00:14:20.143 "percent": 51 00:14:20.143 } 00:14:20.143 }, 00:14:20.143 "base_bdevs_list": [ 00:14:20.143 { 00:14:20.143 "name": "spare", 00:14:20.143 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:20.143 "is_configured": true, 00:14:20.143 "data_offset": 
2048, 00:14:20.143 "data_size": 63488 00:14:20.143 }, 00:14:20.143 { 00:14:20.143 "name": "BaseBdev2", 00:14:20.143 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:20.143 "is_configured": true, 00:14:20.143 "data_offset": 2048, 00:14:20.143 "data_size": 63488 00:14:20.143 } 00:14:20.143 ] 00:14:20.143 }' 00:14:20.143 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.143 [2024-11-26 19:03:46.579459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:20.143 [2024-11-26 19:03:46.579986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:20.143 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.143 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.143 135.40 IOPS, 406.20 MiB/s [2024-11-26T19:03:46.766Z] 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.143 19:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.401 [2024-11-26 19:03:46.926740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:20.660 [2024-11-26 19:03:47.049438] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:21.228 118.00 IOPS, 354.00 MiB/s [2024-11-26T19:03:47.851Z] 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.228 "name": "raid_bdev1", 00:14:21.228 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:21.228 "strip_size_kb": 0, 00:14:21.228 "state": "online", 00:14:21.228 "raid_level": "raid1", 00:14:21.228 "superblock": true, 00:14:21.228 "num_base_bdevs": 2, 00:14:21.228 "num_base_bdevs_discovered": 2, 00:14:21.228 "num_base_bdevs_operational": 2, 00:14:21.228 "process": { 00:14:21.228 "type": "rebuild", 00:14:21.228 "target": "spare", 00:14:21.228 "progress": { 00:14:21.228 "blocks": 49152, 00:14:21.228 "percent": 77 00:14:21.228 } 00:14:21.228 }, 00:14:21.228 "base_bdevs_list": [ 00:14:21.228 { 00:14:21.228 "name": "spare", 00:14:21.228 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:21.228 "is_configured": true, 00:14:21.228 "data_offset": 2048, 00:14:21.228 "data_size": 63488 00:14:21.228 }, 00:14:21.228 { 00:14:21.228 "name": "BaseBdev2", 00:14:21.228 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:21.228 "is_configured": true, 00:14:21.228 "data_offset": 2048, 
00:14:21.228 "data_size": 63488 00:14:21.228 } 00:14:21.228 ] 00:14:21.228 }' 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.228 19:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.870 [2024-11-26 19:03:48.200803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:21.870 [2024-11-26 19:03:48.435468] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:22.130 [2024-11-26 19:03:48.543963] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:22.130 [2024-11-26 19:03:48.548866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.389 107.00 IOPS, 321.00 MiB/s [2024-11-26T19:03:49.012Z] 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.389 "name": "raid_bdev1", 00:14:22.389 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:22.389 "strip_size_kb": 0, 00:14:22.389 "state": "online", 00:14:22.389 "raid_level": "raid1", 00:14:22.389 "superblock": true, 00:14:22.389 "num_base_bdevs": 2, 00:14:22.389 "num_base_bdevs_discovered": 2, 00:14:22.389 "num_base_bdevs_operational": 2, 00:14:22.389 "base_bdevs_list": [ 00:14:22.389 { 00:14:22.389 "name": "spare", 00:14:22.389 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:22.389 "is_configured": true, 00:14:22.389 "data_offset": 2048, 00:14:22.389 "data_size": 63488 00:14:22.389 }, 00:14:22.389 { 00:14:22.389 "name": "BaseBdev2", 00:14:22.389 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:22.389 "is_configured": true, 00:14:22.389 "data_offset": 2048, 00:14:22.389 "data_size": 63488 00:14:22.389 } 00:14:22.389 ] 00:14:22.389 }' 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@709 -- # break 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.389 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.390 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.390 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.390 19:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.651 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.652 "name": "raid_bdev1", 00:14:22.652 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:22.652 "strip_size_kb": 0, 00:14:22.652 "state": "online", 00:14:22.652 "raid_level": "raid1", 00:14:22.652 "superblock": true, 00:14:22.652 "num_base_bdevs": 2, 00:14:22.652 "num_base_bdevs_discovered": 2, 00:14:22.652 "num_base_bdevs_operational": 2, 00:14:22.652 "base_bdevs_list": [ 00:14:22.652 { 00:14:22.652 "name": "spare", 00:14:22.652 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:22.652 "is_configured": true, 00:14:22.652 "data_offset": 2048, 00:14:22.652 "data_size": 63488 00:14:22.652 }, 00:14:22.652 { 00:14:22.652 "name": "BaseBdev2", 00:14:22.652 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 
00:14:22.652 "is_configured": true, 00:14:22.652 "data_offset": 2048, 00:14:22.652 "data_size": 63488 00:14:22.652 } 00:14:22.652 ] 00:14:22.652 }' 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.652 19:03:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.652 "name": "raid_bdev1", 00:14:22.652 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:22.652 "strip_size_kb": 0, 00:14:22.652 "state": "online", 00:14:22.652 "raid_level": "raid1", 00:14:22.652 "superblock": true, 00:14:22.652 "num_base_bdevs": 2, 00:14:22.652 "num_base_bdevs_discovered": 2, 00:14:22.652 "num_base_bdevs_operational": 2, 00:14:22.652 "base_bdevs_list": [ 00:14:22.652 { 00:14:22.652 "name": "spare", 00:14:22.652 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:22.652 "is_configured": true, 00:14:22.652 "data_offset": 2048, 00:14:22.652 "data_size": 63488 00:14:22.652 }, 00:14:22.652 { 00:14:22.652 "name": "BaseBdev2", 00:14:22.652 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:22.652 "is_configured": true, 00:14:22.652 "data_offset": 2048, 00:14:22.652 "data_size": 63488 00:14:22.652 } 00:14:22.652 ] 00:14:22.652 }' 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.652 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.220 98.25 IOPS, 294.75 MiB/s [2024-11-26T19:03:49.843Z] 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:23.220 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.220 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.220 [2024-11-26 19:03:49.670404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:14:23.220 [2024-11-26 19:03:49.670446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.220 00:14:23.221 Latency(us) 00:14:23.221 [2024-11-26T19:03:49.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.221 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:23.221 raid_bdev1 : 8.16 97.39 292.16 0.00 0.00 14201.10 283.00 120586.24 00:14:23.221 [2024-11-26T19:03:49.844Z] =================================================================================================================== 00:14:23.221 [2024-11-26T19:03:49.844Z] Total : 97.39 292.16 0.00 0.00 14201.10 283.00 120586.24 00:14:23.221 [2024-11-26 19:03:49.791927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.221 { 00:14:23.221 "results": [ 00:14:23.221 { 00:14:23.221 "job": "raid_bdev1", 00:14:23.221 "core_mask": "0x1", 00:14:23.221 "workload": "randrw", 00:14:23.221 "percentage": 50, 00:14:23.221 "status": "finished", 00:14:23.221 "queue_depth": 2, 00:14:23.221 "io_size": 3145728, 00:14:23.221 "runtime": 8.163337, 00:14:23.221 "iops": 97.3866447017929, 00:14:23.221 "mibps": 292.15993410537874, 00:14:23.221 "io_failed": 0, 00:14:23.221 "io_timeout": 0, 00:14:23.221 "avg_latency_us": 14201.098977701544, 00:14:23.221 "min_latency_us": 282.99636363636364, 00:14:23.221 "max_latency_us": 120586.24 00:14:23.221 } 00:14:23.221 ], 00:14:23.221 "core_count": 1 00:14:23.221 } 00:14:23.221 [2024-11-26 19:03:49.792261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.221 [2024-11-26 19:03:49.792449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.221 [2024-11-26 19:03:49.792473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:23.221 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.221 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.221 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:23.221 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.221 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.221 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.481 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:23.481 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:23.481 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:23.481 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:23.481 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.481 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:23.481 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:23.481 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:23.482 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:23.482 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:23.482 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:23.482 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.482 19:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:23.740 /dev/nbd0 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.740 1+0 records in 00:14:23.740 1+0 records out 00:14:23.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393547 s, 10.4 MB/s 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:23.740 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.741 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:23.999 /dev/nbd1 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.999 1+0 records in 00:14:23.999 1+0 records out 00:14:23.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474298 s, 8.6 MB/s 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.999 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.999 19:03:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:24.257 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:24.257 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.257 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:24.257 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.257 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:24.257 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.257 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.516 19:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.774 
19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.774 [2024-11-26 19:03:51.255571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:24.774 [2024-11-26 19:03:51.255856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.774 [2024-11-26 19:03:51.256032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:24.774 [2024-11-26 19:03:51.256193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.774 [2024-11-26 19:03:51.259677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.774 [2024-11-26 19:03:51.259738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:24.774 [2024-11-26 19:03:51.259898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:24.774 [2024-11-26 19:03:51.259988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.774 [2024-11-26 19:03:51.260255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.774 spare 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.774 [2024-11-26 19:03:51.360438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:14:24.774 [2024-11-26 19:03:51.360532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:24.774 [2024-11-26 19:03:51.361031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:24.774 [2024-11-26 19:03:51.361388] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:24.774 [2024-11-26 19:03:51.361409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:24.774 [2024-11-26 19:03:51.361697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.774 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.775 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.033 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.033 "name": "raid_bdev1", 00:14:25.033 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:25.033 "strip_size_kb": 0, 00:14:25.033 "state": "online", 00:14:25.033 "raid_level": "raid1", 00:14:25.033 "superblock": true, 00:14:25.033 "num_base_bdevs": 2, 00:14:25.033 "num_base_bdevs_discovered": 2, 00:14:25.033 "num_base_bdevs_operational": 2, 00:14:25.033 "base_bdevs_list": [ 00:14:25.033 { 00:14:25.033 "name": "spare", 00:14:25.033 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:25.033 "is_configured": true, 00:14:25.033 "data_offset": 2048, 00:14:25.033 "data_size": 63488 00:14:25.033 }, 00:14:25.033 { 00:14:25.033 "name": "BaseBdev2", 00:14:25.033 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:25.033 "is_configured": true, 00:14:25.033 "data_offset": 2048, 00:14:25.033 "data_size": 63488 00:14:25.033 } 00:14:25.033 ] 00:14:25.033 }' 00:14:25.033 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.033 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.600 "name": "raid_bdev1", 00:14:25.600 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:25.600 "strip_size_kb": 0, 00:14:25.600 "state": "online", 00:14:25.600 "raid_level": "raid1", 00:14:25.600 "superblock": true, 00:14:25.600 "num_base_bdevs": 2, 00:14:25.600 "num_base_bdevs_discovered": 2, 00:14:25.600 "num_base_bdevs_operational": 2, 00:14:25.600 "base_bdevs_list": [ 00:14:25.600 { 00:14:25.600 "name": "spare", 00:14:25.600 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:25.600 "is_configured": true, 00:14:25.600 "data_offset": 2048, 00:14:25.600 "data_size": 63488 00:14:25.600 }, 00:14:25.600 { 00:14:25.600 "name": "BaseBdev2", 00:14:25.600 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:25.600 "is_configured": true, 00:14:25.600 "data_offset": 2048, 00:14:25.600 "data_size": 63488 00:14:25.600 } 00:14:25.600 ] 00:14:25.600 }' 00:14:25.600 19:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.600 [2024-11-26 19:03:52.136506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.600 "name": "raid_bdev1", 00:14:25.600 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:25.600 "strip_size_kb": 0, 00:14:25.600 "state": "online", 00:14:25.600 "raid_level": "raid1", 00:14:25.600 "superblock": true, 00:14:25.600 "num_base_bdevs": 2, 00:14:25.600 "num_base_bdevs_discovered": 1, 00:14:25.600 "num_base_bdevs_operational": 1, 00:14:25.600 "base_bdevs_list": [ 00:14:25.600 { 00:14:25.600 "name": null, 00:14:25.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.600 "is_configured": false, 00:14:25.600 "data_offset": 0, 00:14:25.600 "data_size": 63488 00:14:25.600 }, 00:14:25.600 { 00:14:25.600 "name": "BaseBdev2", 00:14:25.600 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:25.600 
"is_configured": true, 00:14:25.600 "data_offset": 2048, 00:14:25.600 "data_size": 63488 00:14:25.600 } 00:14:25.600 ] 00:14:25.600 }' 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.600 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.167 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.167 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.167 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.167 [2024-11-26 19:03:52.668812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.167 [2024-11-26 19:03:52.669205] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:26.167 [2024-11-26 19:03:52.669244] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:26.167 [2024-11-26 19:03:52.669328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.167 [2024-11-26 19:03:52.688264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:26.167 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.167 19:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:26.167 [2024-11-26 19:03:52.691252] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.099 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.100 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.100 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.100 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.100 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.100 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.100 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.100 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.100 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.100 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.358 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.358 "name": "raid_bdev1", 00:14:27.358 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:27.358 "strip_size_kb": 0, 00:14:27.358 "state": "online", 
00:14:27.358 "raid_level": "raid1", 00:14:27.358 "superblock": true, 00:14:27.358 "num_base_bdevs": 2, 00:14:27.358 "num_base_bdevs_discovered": 2, 00:14:27.358 "num_base_bdevs_operational": 2, 00:14:27.358 "process": { 00:14:27.358 "type": "rebuild", 00:14:27.358 "target": "spare", 00:14:27.358 "progress": { 00:14:27.358 "blocks": 20480, 00:14:27.358 "percent": 32 00:14:27.358 } 00:14:27.358 }, 00:14:27.358 "base_bdevs_list": [ 00:14:27.358 { 00:14:27.358 "name": "spare", 00:14:27.358 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:27.358 "is_configured": true, 00:14:27.358 "data_offset": 2048, 00:14:27.358 "data_size": 63488 00:14:27.358 }, 00:14:27.358 { 00:14:27.358 "name": "BaseBdev2", 00:14:27.359 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:27.359 "is_configured": true, 00:14:27.359 "data_offset": 2048, 00:14:27.359 "data_size": 63488 00:14:27.359 } 00:14:27.359 ] 00:14:27.359 }' 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.359 [2024-11-26 19:03:53.861527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.359 [2024-11-26 19:03:53.904229] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.359 [2024-11-26 
19:03:53.904359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.359 [2024-11-26 19:03:53.904392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.359 [2024-11-26 19:03:53.904414] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.359 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.617 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.617 "name": "raid_bdev1", 00:14:27.617 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:27.617 "strip_size_kb": 0, 00:14:27.617 "state": "online", 00:14:27.617 "raid_level": "raid1", 00:14:27.617 "superblock": true, 00:14:27.617 "num_base_bdevs": 2, 00:14:27.617 "num_base_bdevs_discovered": 1, 00:14:27.617 "num_base_bdevs_operational": 1, 00:14:27.617 "base_bdevs_list": [ 00:14:27.617 { 00:14:27.617 "name": null, 00:14:27.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.617 "is_configured": false, 00:14:27.617 "data_offset": 0, 00:14:27.617 "data_size": 63488 00:14:27.617 }, 00:14:27.617 { 00:14:27.617 "name": "BaseBdev2", 00:14:27.617 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:27.617 "is_configured": true, 00:14:27.617 "data_offset": 2048, 00:14:27.617 "data_size": 63488 00:14:27.617 } 00:14:27.617 ] 00:14:27.617 }' 00:14:27.617 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.617 19:03:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.876 19:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:27.876 19:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.876 19:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.876 [2024-11-26 19:03:54.485043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:27.876 [2024-11-26 19:03:54.485339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.876 [2024-11-26 19:03:54.485389] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:14:27.876 [2024-11-26 19:03:54.485413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.876 [2024-11-26 19:03:54.486180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.876 [2024-11-26 19:03:54.486224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:27.876 [2024-11-26 19:03:54.486396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:27.876 [2024-11-26 19:03:54.486428] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:27.876 [2024-11-26 19:03:54.486453] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:27.876 [2024-11-26 19:03:54.486492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.134 [2024-11-26 19:03:54.504265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:28.134 spare 00:14:28.134 19:03:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.134 19:03:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:28.134 [2024-11-26 19:03:54.507189] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.071 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.071 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.071 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.071 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.071 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.071 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.071 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.071 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.071 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.071 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.072 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.072 "name": "raid_bdev1", 00:14:29.072 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:29.072 "strip_size_kb": 0, 00:14:29.072 "state": "online", 00:14:29.072 "raid_level": "raid1", 00:14:29.072 "superblock": true, 00:14:29.072 "num_base_bdevs": 2, 00:14:29.072 "num_base_bdevs_discovered": 2, 00:14:29.072 "num_base_bdevs_operational": 2, 00:14:29.072 "process": { 00:14:29.072 "type": "rebuild", 00:14:29.072 "target": "spare", 00:14:29.072 "progress": { 00:14:29.072 "blocks": 18432, 00:14:29.072 "percent": 29 00:14:29.072 } 00:14:29.072 }, 00:14:29.072 "base_bdevs_list": [ 00:14:29.072 { 00:14:29.072 "name": "spare", 00:14:29.072 "uuid": "88c4a980-1a6b-5850-8050-152c7bca5a19", 00:14:29.072 "is_configured": true, 00:14:29.072 "data_offset": 2048, 00:14:29.072 "data_size": 63488 00:14:29.072 }, 00:14:29.072 { 00:14:29.072 "name": "BaseBdev2", 00:14:29.072 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:29.072 "is_configured": true, 00:14:29.072 "data_offset": 2048, 00:14:29.072 "data_size": 63488 00:14:29.072 } 00:14:29.072 ] 00:14:29.072 }' 00:14:29.072 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.072 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:29.072 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.072 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.072 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:29.072 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.072 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.072 [2024-11-26 19:03:55.670011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.331 [2024-11-26 19:03:55.720179] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.331 [2024-11-26 19:03:55.720389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.331 [2024-11-26 19:03:55.720460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.331 [2024-11-26 19:03:55.720484] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.331 "name": "raid_bdev1", 00:14:29.331 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:29.331 "strip_size_kb": 0, 00:14:29.331 "state": "online", 00:14:29.331 "raid_level": "raid1", 00:14:29.331 "superblock": true, 00:14:29.331 "num_base_bdevs": 2, 00:14:29.331 "num_base_bdevs_discovered": 1, 00:14:29.331 "num_base_bdevs_operational": 1, 00:14:29.331 "base_bdevs_list": [ 00:14:29.331 { 00:14:29.331 "name": null, 00:14:29.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.331 "is_configured": false, 00:14:29.331 "data_offset": 0, 00:14:29.331 "data_size": 63488 00:14:29.331 }, 00:14:29.331 { 00:14:29.331 "name": "BaseBdev2", 00:14:29.331 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:29.331 "is_configured": true, 00:14:29.331 "data_offset": 2048, 00:14:29.331 "data_size": 63488 00:14:29.331 } 00:14:29.331 ] 00:14:29.331 }' 
00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.331 19:03:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.898 "name": "raid_bdev1", 00:14:29.898 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:29.898 "strip_size_kb": 0, 00:14:29.898 "state": "online", 00:14:29.898 "raid_level": "raid1", 00:14:29.898 "superblock": true, 00:14:29.898 "num_base_bdevs": 2, 00:14:29.898 "num_base_bdevs_discovered": 1, 00:14:29.898 "num_base_bdevs_operational": 1, 00:14:29.898 "base_bdevs_list": [ 00:14:29.898 { 00:14:29.898 "name": null, 00:14:29.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.898 "is_configured": false, 00:14:29.898 "data_offset": 0, 
00:14:29.898 "data_size": 63488 00:14:29.898 }, 00:14:29.898 { 00:14:29.898 "name": "BaseBdev2", 00:14:29.898 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:29.898 "is_configured": true, 00:14:29.898 "data_offset": 2048, 00:14:29.898 "data_size": 63488 00:14:29.898 } 00:14:29.898 ] 00:14:29.898 }' 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.898 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.898 [2024-11-26 19:03:56.479878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:29.898 [2024-11-26 19:03:56.479983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.898 [2024-11-26 19:03:56.480034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:29.898 [2024-11-26 19:03:56.480055] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.898 [2024-11-26 19:03:56.480731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.898 [2024-11-26 19:03:56.480769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:29.898 [2024-11-26 19:03:56.480924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:29.899 [2024-11-26 19:03:56.480951] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:29.899 [2024-11-26 19:03:56.480978] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:29.899 [2024-11-26 19:03:56.480996] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:29.899 BaseBdev1 00:14:29.899 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.899 19:03:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.274 "name": "raid_bdev1", 00:14:31.274 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:31.274 "strip_size_kb": 0, 00:14:31.274 "state": "online", 00:14:31.274 "raid_level": "raid1", 00:14:31.274 "superblock": true, 00:14:31.274 "num_base_bdevs": 2, 00:14:31.274 "num_base_bdevs_discovered": 1, 00:14:31.274 "num_base_bdevs_operational": 1, 00:14:31.274 "base_bdevs_list": [ 00:14:31.274 { 00:14:31.274 "name": null, 00:14:31.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.274 "is_configured": false, 00:14:31.274 "data_offset": 0, 00:14:31.274 "data_size": 63488 00:14:31.274 }, 00:14:31.274 { 00:14:31.274 "name": "BaseBdev2", 00:14:31.274 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:31.274 "is_configured": true, 00:14:31.274 "data_offset": 2048, 00:14:31.274 "data_size": 63488 00:14:31.274 } 00:14:31.274 ] 00:14:31.274 }' 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.274 19:03:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.533 "name": "raid_bdev1", 00:14:31.533 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:31.533 "strip_size_kb": 0, 00:14:31.533 "state": "online", 00:14:31.533 "raid_level": "raid1", 00:14:31.533 "superblock": true, 00:14:31.533 "num_base_bdevs": 2, 00:14:31.533 "num_base_bdevs_discovered": 1, 00:14:31.533 "num_base_bdevs_operational": 1, 00:14:31.533 "base_bdevs_list": [ 00:14:31.533 { 00:14:31.533 "name": null, 00:14:31.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.533 "is_configured": false, 00:14:31.533 "data_offset": 0, 00:14:31.533 "data_size": 63488 00:14:31.533 }, 00:14:31.533 { 00:14:31.533 "name": "BaseBdev2", 00:14:31.533 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:31.533 "is_configured": true, 
00:14:31.533 "data_offset": 2048, 00:14:31.533 "data_size": 63488 00:14:31.533 } 00:14:31.533 ] 00:14:31.533 }' 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.533 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.795 [2024-11-26 19:03:58.168706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.795 [2024-11-26 19:03:58.168995] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:31.795 [2024-11-26 19:03:58.169029] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:31.795 request: 00:14:31.795 { 00:14:31.795 "base_bdev": "BaseBdev1", 00:14:31.795 "raid_bdev": "raid_bdev1", 00:14:31.795 "method": "bdev_raid_add_base_bdev", 00:14:31.795 "req_id": 1 00:14:31.795 } 00:14:31.795 Got JSON-RPC error response 00:14:31.795 response: 00:14:31.795 { 00:14:31.795 "code": -22, 00:14:31.795 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:31.795 } 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.795 19:03:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.731 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.732 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.732 "name": "raid_bdev1", 00:14:32.732 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:32.732 "strip_size_kb": 0, 00:14:32.732 "state": "online", 00:14:32.732 "raid_level": "raid1", 00:14:32.732 "superblock": true, 00:14:32.732 "num_base_bdevs": 2, 00:14:32.732 "num_base_bdevs_discovered": 1, 00:14:32.732 "num_base_bdevs_operational": 1, 00:14:32.732 "base_bdevs_list": [ 00:14:32.732 { 00:14:32.732 "name": null, 00:14:32.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.732 "is_configured": false, 00:14:32.732 "data_offset": 0, 00:14:32.732 "data_size": 63488 00:14:32.732 }, 00:14:32.732 { 00:14:32.732 "name": "BaseBdev2", 00:14:32.732 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:32.732 "is_configured": true, 00:14:32.732 "data_offset": 2048, 00:14:32.732 "data_size": 63488 00:14:32.732 } 00:14:32.732 ] 00:14:32.732 }' 
00:14:32.732 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.732 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.299 "name": "raid_bdev1", 00:14:33.299 "uuid": "5fae5c02-8519-4613-ba58-9cd63d4dfa5a", 00:14:33.299 "strip_size_kb": 0, 00:14:33.299 "state": "online", 00:14:33.299 "raid_level": "raid1", 00:14:33.299 "superblock": true, 00:14:33.299 "num_base_bdevs": 2, 00:14:33.299 "num_base_bdevs_discovered": 1, 00:14:33.299 "num_base_bdevs_operational": 1, 00:14:33.299 "base_bdevs_list": [ 00:14:33.299 { 00:14:33.299 "name": null, 00:14:33.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.299 "is_configured": false, 00:14:33.299 "data_offset": 0, 
00:14:33.299 "data_size": 63488 00:14:33.299 }, 00:14:33.299 { 00:14:33.299 "name": "BaseBdev2", 00:14:33.299 "uuid": "2244f13f-fb9c-5bd0-9d88-aa015af760b9", 00:14:33.299 "is_configured": true, 00:14:33.299 "data_offset": 2048, 00:14:33.299 "data_size": 63488 00:14:33.299 } 00:14:33.299 ] 00:14:33.299 }' 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77491 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77491 ']' 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77491 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77491 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.299 killing process with pid 77491 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77491' 00:14:33.299 Received shutdown signal, test time was about 18.266559 seconds 00:14:33.299 00:14:33.299 Latency(us) 00:14:33.299 [2024-11-26T19:03:59.922Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.299 [2024-11-26T19:03:59.922Z] =================================================================================================================== 00:14:33.299 [2024-11-26T19:03:59.922Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77491 00:14:33.299 [2024-11-26 19:03:59.873468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.299 19:03:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77491 00:14:33.299 [2024-11-26 19:03:59.873686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.299 [2024-11-26 19:03:59.873774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.299 [2024-11-26 19:03:59.873807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:33.557 [2024-11-26 19:04:00.109092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:34.929 00:14:34.929 real 0m21.912s 00:14:34.929 user 0m29.610s 00:14:34.929 sys 0m2.165s 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.929 ************************************ 00:14:34.929 END TEST raid_rebuild_test_sb_io 00:14:34.929 ************************************ 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.929 19:04:01 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:34.929 19:04:01 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:34.929 19:04:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:34.929 
19:04:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.929 19:04:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.929 ************************************ 00:14:34.929 START TEST raid_rebuild_test 00:14:34.929 ************************************ 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78197 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78197 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 78197 ']' 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.929 19:04:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.929 19:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.188 [2024-11-26 19:04:01.571025] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:14:35.188 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:35.188 Zero copy mechanism will not be used. 00:14:35.188 [2024-11-26 19:04:01.571208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78197 ] 00:14:35.188 [2024-11-26 19:04:01.768605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.446 [2024-11-26 19:04:01.961969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.704 [2024-11-26 19:04:02.235822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.704 [2024-11-26 19:04:02.235928] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.272 BaseBdev1_malloc 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.272 [2024-11-26 19:04:02.698889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.272 [2024-11-26 19:04:02.698960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.272 [2024-11-26 19:04:02.698992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:36.272 [2024-11-26 19:04:02.699011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.272 [2024-11-26 19:04:02.702113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.272 [2024-11-26 19:04:02.702159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.272 BaseBdev1 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:36.272 BaseBdev2_malloc 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.272 [2024-11-26 19:04:02.768457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:36.272 [2024-11-26 19:04:02.768547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.272 [2024-11-26 19:04:02.768582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:36.272 [2024-11-26 19:04:02.768600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.272 [2024-11-26 19:04:02.771589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.272 [2024-11-26 19:04:02.771636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.272 BaseBdev2 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.272 BaseBdev3_malloc 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.272 [2024-11-26 19:04:02.839055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:36.272 [2024-11-26 19:04:02.839129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.272 [2024-11-26 19:04:02.839161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:36.272 [2024-11-26 19:04:02.839180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.272 [2024-11-26 19:04:02.842257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.272 [2024-11-26 19:04:02.842320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:36.272 BaseBdev3 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.272 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.531 BaseBdev4_malloc 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:36.531 [2024-11-26 19:04:02.902198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:36.531 [2024-11-26 19:04:02.902273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.531 [2024-11-26 19:04:02.902319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:36.531 [2024-11-26 19:04:02.902340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.531 [2024-11-26 19:04:02.905458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.531 [2024-11-26 19:04:02.905523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:36.531 BaseBdev4 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.531 spare_malloc 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.531 spare_delay 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:36.531 
19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.531 [2024-11-26 19:04:02.968873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:36.531 [2024-11-26 19:04:02.968978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.531 [2024-11-26 19:04:02.969006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:36.531 [2024-11-26 19:04:02.969024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.531 [2024-11-26 19:04:02.971957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.531 [2024-11-26 19:04:02.972016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:36.531 spare 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.531 [2024-11-26 19:04:02.981015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.531 [2024-11-26 19:04:02.983709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.531 [2024-11-26 19:04:02.983814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.531 [2024-11-26 19:04:02.983921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:36.531 [2024-11-26 19:04:02.984049] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:14:36.531 [2024-11-26 19:04:02.984072] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:36.531 [2024-11-26 19:04:02.984426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:36.531 [2024-11-26 19:04:02.984646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:36.531 [2024-11-26 19:04:02.984670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:36.531 [2024-11-26 19:04:02.984952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.531 19:04:02 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.531 19:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.531 19:04:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.531 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.531 "name": "raid_bdev1", 00:14:36.531 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:36.531 "strip_size_kb": 0, 00:14:36.531 "state": "online", 00:14:36.531 "raid_level": "raid1", 00:14:36.531 "superblock": false, 00:14:36.531 "num_base_bdevs": 4, 00:14:36.531 "num_base_bdevs_discovered": 4, 00:14:36.531 "num_base_bdevs_operational": 4, 00:14:36.531 "base_bdevs_list": [ 00:14:36.531 { 00:14:36.531 "name": "BaseBdev1", 00:14:36.531 "uuid": "0b139353-c7e0-5c55-813d-088dd8c1fca6", 00:14:36.531 "is_configured": true, 00:14:36.531 "data_offset": 0, 00:14:36.531 "data_size": 65536 00:14:36.531 }, 00:14:36.531 { 00:14:36.531 "name": "BaseBdev2", 00:14:36.531 "uuid": "908b0907-7e2c-5d98-9486-579fb1102b63", 00:14:36.531 "is_configured": true, 00:14:36.531 "data_offset": 0, 00:14:36.531 "data_size": 65536 00:14:36.531 }, 00:14:36.531 { 00:14:36.531 "name": "BaseBdev3", 00:14:36.531 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:36.531 "is_configured": true, 00:14:36.531 "data_offset": 0, 00:14:36.531 "data_size": 65536 00:14:36.531 }, 00:14:36.531 { 00:14:36.531 "name": "BaseBdev4", 00:14:36.531 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:36.531 "is_configured": true, 00:14:36.531 "data_offset": 0, 00:14:36.531 "data_size": 65536 00:14:36.531 } 00:14:36.531 ] 00:14:36.531 }' 00:14:36.531 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.531 19:04:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.097 [2024-11-26 19:04:03.557754] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.097 19:04:03 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:37.098 19:04:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:37.098 19:04:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:37.098 19:04:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:37.098 19:04:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:37.098 19:04:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:37.098 19:04:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:37.098 19:04:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:37.665 [2024-11-26 19:04:03.989571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:37.665 /dev/nbd0 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:37.665 19:04:04 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.665 1+0 records in 00:14:37.665 1+0 records out 00:14:37.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473565 s, 8.6 MB/s 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:37.665 19:04:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:47.638 65536+0 records in 00:14:47.638 65536+0 records out 00:14:47.638 33554432 bytes (34 MB, 32 MiB) copied, 9.26249 s, 3.6 MB/s 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:47.638 
19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:47.638 [2024-11-26 19:04:13.635119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.638 [2024-11-26 19:04:13.646703] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.638 "name": "raid_bdev1", 00:14:47.638 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:47.638 "strip_size_kb": 0, 00:14:47.638 "state": "online", 00:14:47.638 "raid_level": "raid1", 00:14:47.638 "superblock": false, 00:14:47.638 "num_base_bdevs": 4, 00:14:47.638 "num_base_bdevs_discovered": 3, 00:14:47.638 "num_base_bdevs_operational": 3, 00:14:47.638 "base_bdevs_list": [ 00:14:47.638 { 00:14:47.638 "name": null, 00:14:47.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.638 
"is_configured": false, 00:14:47.638 "data_offset": 0, 00:14:47.638 "data_size": 65536 00:14:47.638 }, 00:14:47.638 { 00:14:47.638 "name": "BaseBdev2", 00:14:47.638 "uuid": "908b0907-7e2c-5d98-9486-579fb1102b63", 00:14:47.638 "is_configured": true, 00:14:47.638 "data_offset": 0, 00:14:47.638 "data_size": 65536 00:14:47.638 }, 00:14:47.638 { 00:14:47.638 "name": "BaseBdev3", 00:14:47.638 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:47.638 "is_configured": true, 00:14:47.638 "data_offset": 0, 00:14:47.638 "data_size": 65536 00:14:47.638 }, 00:14:47.638 { 00:14:47.638 "name": "BaseBdev4", 00:14:47.638 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:47.638 "is_configured": true, 00:14:47.638 "data_offset": 0, 00:14:47.638 "data_size": 65536 00:14:47.638 } 00:14:47.638 ] 00:14:47.638 }' 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.638 19:04:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.638 19:04:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.638 19:04:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.638 19:04:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.638 [2024-11-26 19:04:14.166973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.638 [2024-11-26 19:04:14.183686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:47.638 19:04:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.638 19:04:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:47.638 [2024-11-26 19:04:14.186533] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.572 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.572 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.572 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.572 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.572 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.572 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.572 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.572 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.830 "name": "raid_bdev1", 00:14:48.830 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:48.830 "strip_size_kb": 0, 00:14:48.830 "state": "online", 00:14:48.830 "raid_level": "raid1", 00:14:48.830 "superblock": false, 00:14:48.830 "num_base_bdevs": 4, 00:14:48.830 "num_base_bdevs_discovered": 4, 00:14:48.830 "num_base_bdevs_operational": 4, 00:14:48.830 "process": { 00:14:48.830 "type": "rebuild", 00:14:48.830 "target": "spare", 00:14:48.830 "progress": { 00:14:48.830 "blocks": 18432, 00:14:48.830 "percent": 28 00:14:48.830 } 00:14:48.830 }, 00:14:48.830 "base_bdevs_list": [ 00:14:48.830 { 00:14:48.830 "name": "spare", 00:14:48.830 "uuid": "1d4712eb-d075-5428-8d02-45d6cb5460e4", 00:14:48.830 "is_configured": true, 00:14:48.830 "data_offset": 0, 00:14:48.830 "data_size": 65536 00:14:48.830 }, 00:14:48.830 { 00:14:48.830 "name": "BaseBdev2", 00:14:48.830 "uuid": 
"908b0907-7e2c-5d98-9486-579fb1102b63", 00:14:48.830 "is_configured": true, 00:14:48.830 "data_offset": 0, 00:14:48.830 "data_size": 65536 00:14:48.830 }, 00:14:48.830 { 00:14:48.830 "name": "BaseBdev3", 00:14:48.830 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:48.830 "is_configured": true, 00:14:48.830 "data_offset": 0, 00:14:48.830 "data_size": 65536 00:14:48.830 }, 00:14:48.830 { 00:14:48.830 "name": "BaseBdev4", 00:14:48.830 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:48.830 "is_configured": true, 00:14:48.830 "data_offset": 0, 00:14:48.830 "data_size": 65536 00:14:48.830 } 00:14:48.830 ] 00:14:48.830 }' 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.830 [2024-11-26 19:04:15.353112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.830 [2024-11-26 19:04:15.399450] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.830 [2024-11-26 19:04:15.399815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.830 [2024-11-26 19:04:15.399966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.830 [2024-11-26 19:04:15.400027] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.830 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.088 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.088 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.088 "name": "raid_bdev1", 00:14:49.088 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:49.088 "strip_size_kb": 0, 00:14:49.088 "state": "online", 
00:14:49.088 "raid_level": "raid1", 00:14:49.088 "superblock": false, 00:14:49.088 "num_base_bdevs": 4, 00:14:49.088 "num_base_bdevs_discovered": 3, 00:14:49.088 "num_base_bdevs_operational": 3, 00:14:49.088 "base_bdevs_list": [ 00:14:49.088 { 00:14:49.088 "name": null, 00:14:49.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.088 "is_configured": false, 00:14:49.088 "data_offset": 0, 00:14:49.088 "data_size": 65536 00:14:49.088 }, 00:14:49.088 { 00:14:49.088 "name": "BaseBdev2", 00:14:49.088 "uuid": "908b0907-7e2c-5d98-9486-579fb1102b63", 00:14:49.088 "is_configured": true, 00:14:49.088 "data_offset": 0, 00:14:49.088 "data_size": 65536 00:14:49.088 }, 00:14:49.088 { 00:14:49.088 "name": "BaseBdev3", 00:14:49.088 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:49.088 "is_configured": true, 00:14:49.088 "data_offset": 0, 00:14:49.088 "data_size": 65536 00:14:49.088 }, 00:14:49.088 { 00:14:49.088 "name": "BaseBdev4", 00:14:49.088 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:49.088 "is_configured": true, 00:14:49.088 "data_offset": 0, 00:14:49.088 "data_size": 65536 00:14:49.088 } 00:14:49.088 ] 00:14:49.088 }' 00:14:49.088 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.088 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.346 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.346 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.346 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.346 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.346 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.346 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:49.346 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.346 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.346 19:04:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.605 19:04:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.605 19:04:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.605 "name": "raid_bdev1", 00:14:49.605 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:49.605 "strip_size_kb": 0, 00:14:49.605 "state": "online", 00:14:49.605 "raid_level": "raid1", 00:14:49.605 "superblock": false, 00:14:49.605 "num_base_bdevs": 4, 00:14:49.605 "num_base_bdevs_discovered": 3, 00:14:49.605 "num_base_bdevs_operational": 3, 00:14:49.605 "base_bdevs_list": [ 00:14:49.605 { 00:14:49.605 "name": null, 00:14:49.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.605 "is_configured": false, 00:14:49.605 "data_offset": 0, 00:14:49.605 "data_size": 65536 00:14:49.605 }, 00:14:49.605 { 00:14:49.605 "name": "BaseBdev2", 00:14:49.605 "uuid": "908b0907-7e2c-5d98-9486-579fb1102b63", 00:14:49.605 "is_configured": true, 00:14:49.605 "data_offset": 0, 00:14:49.605 "data_size": 65536 00:14:49.605 }, 00:14:49.605 { 00:14:49.605 "name": "BaseBdev3", 00:14:49.605 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:49.605 "is_configured": true, 00:14:49.605 "data_offset": 0, 00:14:49.605 "data_size": 65536 00:14:49.605 }, 00:14:49.605 { 00:14:49.605 "name": "BaseBdev4", 00:14:49.605 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:49.605 "is_configured": true, 00:14:49.605 "data_offset": 0, 00:14:49.605 "data_size": 65536 00:14:49.605 } 00:14:49.605 ] 00:14:49.605 }' 00:14:49.605 19:04:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.605 19:04:16 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.605 19:04:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.605 19:04:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.605 19:04:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:49.605 19:04:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.605 19:04:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.605 [2024-11-26 19:04:16.118669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.605 [2024-11-26 19:04:16.133139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:49.605 19:04:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.605 19:04:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:49.605 [2024-11-26 19:04:16.136020] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.540 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.540 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.540 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.540 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.540 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.540 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.540 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.540 19:04:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.540 19:04:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.540 19:04:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.799 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.799 "name": "raid_bdev1", 00:14:50.799 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:50.799 "strip_size_kb": 0, 00:14:50.799 "state": "online", 00:14:50.799 "raid_level": "raid1", 00:14:50.799 "superblock": false, 00:14:50.799 "num_base_bdevs": 4, 00:14:50.799 "num_base_bdevs_discovered": 4, 00:14:50.799 "num_base_bdevs_operational": 4, 00:14:50.799 "process": { 00:14:50.799 "type": "rebuild", 00:14:50.799 "target": "spare", 00:14:50.799 "progress": { 00:14:50.799 "blocks": 18432, 00:14:50.799 "percent": 28 00:14:50.799 } 00:14:50.799 }, 00:14:50.800 "base_bdevs_list": [ 00:14:50.800 { 00:14:50.800 "name": "spare", 00:14:50.800 "uuid": "1d4712eb-d075-5428-8d02-45d6cb5460e4", 00:14:50.800 "is_configured": true, 00:14:50.800 "data_offset": 0, 00:14:50.800 "data_size": 65536 00:14:50.800 }, 00:14:50.800 { 00:14:50.800 "name": "BaseBdev2", 00:14:50.800 "uuid": "908b0907-7e2c-5d98-9486-579fb1102b63", 00:14:50.800 "is_configured": true, 00:14:50.800 "data_offset": 0, 00:14:50.800 "data_size": 65536 00:14:50.800 }, 00:14:50.800 { 00:14:50.800 "name": "BaseBdev3", 00:14:50.800 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:50.800 "is_configured": true, 00:14:50.800 "data_offset": 0, 00:14:50.800 "data_size": 65536 00:14:50.800 }, 00:14:50.800 { 00:14:50.800 "name": "BaseBdev4", 00:14:50.800 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:50.800 "is_configured": true, 00:14:50.800 "data_offset": 0, 00:14:50.800 "data_size": 65536 00:14:50.800 } 00:14:50.800 ] 00:14:50.800 }' 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.800 [2024-11-26 19:04:17.322791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:50.800 [2024-11-26 19:04:17.349150] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.800 
19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.800 "name": "raid_bdev1", 00:14:50.800 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:50.800 "strip_size_kb": 0, 00:14:50.800 "state": "online", 00:14:50.800 "raid_level": "raid1", 00:14:50.800 "superblock": false, 00:14:50.800 "num_base_bdevs": 4, 00:14:50.800 "num_base_bdevs_discovered": 3, 00:14:50.800 "num_base_bdevs_operational": 3, 00:14:50.800 "process": { 00:14:50.800 "type": "rebuild", 00:14:50.800 "target": "spare", 00:14:50.800 "progress": { 00:14:50.800 "blocks": 24576, 00:14:50.800 "percent": 37 00:14:50.800 } 00:14:50.800 }, 00:14:50.800 "base_bdevs_list": [ 00:14:50.800 { 00:14:50.800 "name": "spare", 00:14:50.800 "uuid": "1d4712eb-d075-5428-8d02-45d6cb5460e4", 00:14:50.800 "is_configured": true, 00:14:50.800 "data_offset": 0, 00:14:50.800 "data_size": 65536 00:14:50.800 }, 00:14:50.800 { 00:14:50.800 "name": null, 00:14:50.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.800 "is_configured": false, 00:14:50.800 "data_offset": 0, 00:14:50.800 "data_size": 65536 00:14:50.800 }, 00:14:50.800 { 00:14:50.800 "name": "BaseBdev3", 00:14:50.800 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:50.800 "is_configured": true, 
00:14:50.800 "data_offset": 0, 00:14:50.800 "data_size": 65536 00:14:50.800 }, 00:14:50.800 { 00:14:50.800 "name": "BaseBdev4", 00:14:50.800 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:50.800 "is_configured": true, 00:14:50.800 "data_offset": 0, 00:14:50.800 "data_size": 65536 00:14:50.800 } 00:14:50.800 ] 00:14:50.800 }' 00:14:50.800 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=495 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.059 19:04:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.059 "name": "raid_bdev1", 00:14:51.059 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:51.059 "strip_size_kb": 0, 00:14:51.059 "state": "online", 00:14:51.059 "raid_level": "raid1", 00:14:51.059 "superblock": false, 00:14:51.059 "num_base_bdevs": 4, 00:14:51.059 "num_base_bdevs_discovered": 3, 00:14:51.059 "num_base_bdevs_operational": 3, 00:14:51.059 "process": { 00:14:51.059 "type": "rebuild", 00:14:51.059 "target": "spare", 00:14:51.059 "progress": { 00:14:51.059 "blocks": 26624, 00:14:51.059 "percent": 40 00:14:51.059 } 00:14:51.059 }, 00:14:51.059 "base_bdevs_list": [ 00:14:51.059 { 00:14:51.059 "name": "spare", 00:14:51.059 "uuid": "1d4712eb-d075-5428-8d02-45d6cb5460e4", 00:14:51.059 "is_configured": true, 00:14:51.059 "data_offset": 0, 00:14:51.059 "data_size": 65536 00:14:51.059 }, 00:14:51.059 { 00:14:51.059 "name": null, 00:14:51.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.059 "is_configured": false, 00:14:51.059 "data_offset": 0, 00:14:51.059 "data_size": 65536 00:14:51.059 }, 00:14:51.059 { 00:14:51.059 "name": "BaseBdev3", 00:14:51.059 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:51.059 "is_configured": true, 00:14:51.059 "data_offset": 0, 00:14:51.059 "data_size": 65536 00:14:51.059 }, 00:14:51.059 { 00:14:51.059 "name": "BaseBdev4", 00:14:51.059 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:51.059 "is_configured": true, 00:14:51.059 "data_offset": 0, 00:14:51.059 "data_size": 65536 00:14:51.059 } 00:14:51.059 ] 00:14:51.059 }' 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.059 19:04:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.436 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.436 "name": "raid_bdev1", 00:14:52.436 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:52.436 "strip_size_kb": 0, 00:14:52.436 "state": "online", 00:14:52.436 "raid_level": "raid1", 00:14:52.436 "superblock": false, 00:14:52.436 "num_base_bdevs": 4, 00:14:52.436 "num_base_bdevs_discovered": 3, 00:14:52.436 "num_base_bdevs_operational": 3, 00:14:52.436 "process": { 00:14:52.437 "type": "rebuild", 00:14:52.437 "target": "spare", 00:14:52.437 "progress": { 00:14:52.437 
"blocks": 51200, 00:14:52.437 "percent": 78 00:14:52.437 } 00:14:52.437 }, 00:14:52.437 "base_bdevs_list": [ 00:14:52.437 { 00:14:52.437 "name": "spare", 00:14:52.437 "uuid": "1d4712eb-d075-5428-8d02-45d6cb5460e4", 00:14:52.437 "is_configured": true, 00:14:52.437 "data_offset": 0, 00:14:52.437 "data_size": 65536 00:14:52.437 }, 00:14:52.437 { 00:14:52.437 "name": null, 00:14:52.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.437 "is_configured": false, 00:14:52.437 "data_offset": 0, 00:14:52.437 "data_size": 65536 00:14:52.437 }, 00:14:52.437 { 00:14:52.437 "name": "BaseBdev3", 00:14:52.437 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:52.437 "is_configured": true, 00:14:52.437 "data_offset": 0, 00:14:52.437 "data_size": 65536 00:14:52.437 }, 00:14:52.437 { 00:14:52.437 "name": "BaseBdev4", 00:14:52.437 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:52.437 "is_configured": true, 00:14:52.437 "data_offset": 0, 00:14:52.437 "data_size": 65536 00:14:52.437 } 00:14:52.437 ] 00:14:52.437 }' 00:14:52.437 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.437 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.437 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.437 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.437 19:04:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.005 [2024-11-26 19:04:19.369359] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:53.005 [2024-11-26 19:04:19.369496] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:53.005 [2024-11-26 19:04:19.369564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.264 19:04:19 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.264 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.264 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.264 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.264 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.264 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.264 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.264 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.264 19:04:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.264 19:04:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.264 19:04:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.524 "name": "raid_bdev1", 00:14:53.524 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:53.524 "strip_size_kb": 0, 00:14:53.524 "state": "online", 00:14:53.524 "raid_level": "raid1", 00:14:53.524 "superblock": false, 00:14:53.524 "num_base_bdevs": 4, 00:14:53.524 "num_base_bdevs_discovered": 3, 00:14:53.524 "num_base_bdevs_operational": 3, 00:14:53.524 "base_bdevs_list": [ 00:14:53.524 { 00:14:53.524 "name": "spare", 00:14:53.524 "uuid": "1d4712eb-d075-5428-8d02-45d6cb5460e4", 00:14:53.524 "is_configured": true, 00:14:53.524 "data_offset": 0, 00:14:53.524 "data_size": 65536 00:14:53.524 }, 00:14:53.524 { 00:14:53.524 "name": null, 00:14:53.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.524 "is_configured": false, 00:14:53.524 
"data_offset": 0, 00:14:53.524 "data_size": 65536 00:14:53.524 }, 00:14:53.524 { 00:14:53.524 "name": "BaseBdev3", 00:14:53.524 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:53.524 "is_configured": true, 00:14:53.524 "data_offset": 0, 00:14:53.524 "data_size": 65536 00:14:53.524 }, 00:14:53.524 { 00:14:53.524 "name": "BaseBdev4", 00:14:53.524 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:53.524 "is_configured": true, 00:14:53.524 "data_offset": 0, 00:14:53.524 "data_size": 65536 00:14:53.524 } 00:14:53.524 ] 00:14:53.524 }' 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.524 19:04:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.524 19:04:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.524 "name": "raid_bdev1", 00:14:53.524 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:53.524 "strip_size_kb": 0, 00:14:53.524 "state": "online", 00:14:53.524 "raid_level": "raid1", 00:14:53.524 "superblock": false, 00:14:53.524 "num_base_bdevs": 4, 00:14:53.524 "num_base_bdevs_discovered": 3, 00:14:53.524 "num_base_bdevs_operational": 3, 00:14:53.524 "base_bdevs_list": [ 00:14:53.524 { 00:14:53.524 "name": "spare", 00:14:53.524 "uuid": "1d4712eb-d075-5428-8d02-45d6cb5460e4", 00:14:53.524 "is_configured": true, 00:14:53.524 "data_offset": 0, 00:14:53.524 "data_size": 65536 00:14:53.524 }, 00:14:53.524 { 00:14:53.524 "name": null, 00:14:53.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.524 "is_configured": false, 00:14:53.524 "data_offset": 0, 00:14:53.524 "data_size": 65536 00:14:53.524 }, 00:14:53.524 { 00:14:53.524 "name": "BaseBdev3", 00:14:53.524 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:53.524 "is_configured": true, 00:14:53.524 "data_offset": 0, 00:14:53.524 "data_size": 65536 00:14:53.524 }, 00:14:53.524 { 00:14:53.524 "name": "BaseBdev4", 00:14:53.524 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:53.524 "is_configured": true, 00:14:53.524 "data_offset": 0, 00:14:53.524 "data_size": 65536 00:14:53.524 } 00:14:53.524 ] 00:14:53.524 }' 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.524 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.783 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.783 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.783 "name": "raid_bdev1", 00:14:53.783 "uuid": "50cce380-997d-47ca-8db0-eacd8729045b", 00:14:53.783 "strip_size_kb": 0, 00:14:53.783 "state": "online", 00:14:53.783 "raid_level": "raid1", 00:14:53.783 "superblock": false, 00:14:53.783 "num_base_bdevs": 4, 00:14:53.783 
"num_base_bdevs_discovered": 3, 00:14:53.783 "num_base_bdevs_operational": 3, 00:14:53.783 "base_bdevs_list": [ 00:14:53.783 { 00:14:53.783 "name": "spare", 00:14:53.783 "uuid": "1d4712eb-d075-5428-8d02-45d6cb5460e4", 00:14:53.783 "is_configured": true, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 65536 00:14:53.783 }, 00:14:53.783 { 00:14:53.783 "name": null, 00:14:53.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.783 "is_configured": false, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 65536 00:14:53.783 }, 00:14:53.783 { 00:14:53.783 "name": "BaseBdev3", 00:14:53.783 "uuid": "13df0f6b-aed6-5363-8b6d-b649cffbe3a4", 00:14:53.783 "is_configured": true, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 65536 00:14:53.783 }, 00:14:53.783 { 00:14:53.783 "name": "BaseBdev4", 00:14:53.783 "uuid": "4f3eaa1d-2f7e-5f9b-b5bd-26c9f74f7117", 00:14:53.783 "is_configured": true, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 65536 00:14:53.783 } 00:14:53.783 ] 00:14:53.784 }' 00:14:53.784 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.784 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.042 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:54.042 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.042 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.042 [2024-11-26 19:04:20.611331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.043 [2024-11-26 19:04:20.611376] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.043 [2024-11-26 19:04:20.611495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.043 [2024-11-26 19:04:20.611623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:14:54.043 [2024-11-26 19:04:20.611642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:54.043 19:04:20 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:54.043 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:54.610 /dev/nbd0 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.611 1+0 records in 00:14:54.611 1+0 records out 00:14:54.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422714 s, 9.7 MB/s 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:54.611 19:04:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:54.870 /dev/nbd1 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.870 1+0 records in 00:14:54.870 1+0 records out 00:14:54.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486078 s, 8.4 MB/s 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:54.870 19:04:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:55.129 19:04:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:55.129 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:55.129 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:55.129 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:55.129 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:55.129 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.129 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:55.388 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:55.388 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:55.388 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:55.388 19:04:21 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.388 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.388 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:55.388 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:55.388 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.388 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:55.388 19:04:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:55.646 19:04:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78197 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 78197 ']' 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 78197 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78197 00:14:55.905 killing process with pid 78197 00:14:55.905 Received shutdown signal, test time was about 60.000000 seconds 00:14:55.905 00:14:55.905 Latency(us) 00:14:55.905 [2024-11-26T19:04:22.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.905 [2024-11-26T19:04:22.528Z] =================================================================================================================== 00:14:55.905 [2024-11-26T19:04:22.528Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78197' 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 78197 00:14:55.905 19:04:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 78197 00:14:55.905 [2024-11-26 19:04:22.308692] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.505 [2024-11-26 19:04:22.803582] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.444 ************************************ 00:14:57.444 END TEST raid_rebuild_test 00:14:57.444 ************************************ 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:57.444 00:14:57.444 real 0m22.554s 00:14:57.444 user 0m25.331s 00:14:57.444 sys 0m4.189s 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:57.444 19:04:24 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:57.444 19:04:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:57.444 19:04:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:57.444 19:04:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.444 ************************************ 00:14:57.444 START TEST raid_rebuild_test_sb 00:14:57.444 ************************************ 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:57.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78690 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78690 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78690 ']' 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.444 19:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.702 [2024-11-26 19:04:24.156986] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:14:57.702 [2024-11-26 19:04:24.157498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:57.702 Zero copy mechanism will not be used. 
00:14:57.702 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78690 ] 00:14:57.960 [2024-11-26 19:04:24.337021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.961 [2024-11-26 19:04:24.516859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.219 [2024-11-26 19:04:24.745366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.219 [2024-11-26 19:04:24.745435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.786 BaseBdev1_malloc 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.786 [2024-11-26 19:04:25.285087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:58.786 [2024-11-26 19:04:25.285200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:58.786 [2024-11-26 19:04:25.285239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:58.786 [2024-11-26 19:04:25.285258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.786 [2024-11-26 19:04:25.288614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.786 [2024-11-26 19:04:25.288671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:58.786 BaseBdev1 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.786 BaseBdev2_malloc 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.786 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.786 [2024-11-26 19:04:25.337769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:58.787 [2024-11-26 19:04:25.337872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.787 [2024-11-26 19:04:25.337911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:58.787 [2024-11-26 19:04:25.337931] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.787 [2024-11-26 19:04:25.341240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.787 [2024-11-26 19:04:25.341316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:58.787 BaseBdev2 00:14:58.787 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.787 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:58.787 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:58.787 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.787 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.787 BaseBdev3_malloc 00:14:58.787 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.787 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:58.787 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.787 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.787 [2024-11-26 19:04:25.405017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:58.787 [2024-11-26 19:04:25.405099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.787 [2024-11-26 19:04:25.405135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:58.787 [2024-11-26 19:04:25.405154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.047 [2024-11-26 19:04:25.408179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:59.047 [2024-11-26 19:04:25.408233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:59.047 BaseBdev3 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.047 BaseBdev4_malloc 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.047 [2024-11-26 19:04:25.457582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:59.047 [2024-11-26 19:04:25.457668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.047 [2024-11-26 19:04:25.457703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:59.047 [2024-11-26 19:04:25.457721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.047 [2024-11-26 19:04:25.460762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.047 [2024-11-26 19:04:25.460817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:59.047 BaseBdev4 00:14:59.047 19:04:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.047 spare_malloc 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.047 spare_delay 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.047 [2024-11-26 19:04:25.522237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:59.047 [2024-11-26 19:04:25.522364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.047 [2024-11-26 19:04:25.522402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:59.047 [2024-11-26 19:04:25.522421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.047 [2024-11-26 19:04:25.525692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:59.047 [2024-11-26 19:04:25.525957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:59.047 spare 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.047 [2024-11-26 19:04:25.530433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.047 [2024-11-26 19:04:25.533268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.047 [2024-11-26 19:04:25.533393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.047 [2024-11-26 19:04:25.533485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.047 [2024-11-26 19:04:25.533775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:59.047 [2024-11-26 19:04:25.533801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:59.047 [2024-11-26 19:04:25.534222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:59.047 [2024-11-26 19:04:25.534522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:59.047 [2024-11-26 19:04:25.534540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:59.047 [2024-11-26 19:04:25.534869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.047 "name": "raid_bdev1", 00:14:59.047 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:14:59.047 "strip_size_kb": 0, 00:14:59.047 "state": "online", 00:14:59.047 "raid_level": "raid1", 
00:14:59.047 "superblock": true, 00:14:59.047 "num_base_bdevs": 4, 00:14:59.047 "num_base_bdevs_discovered": 4, 00:14:59.047 "num_base_bdevs_operational": 4, 00:14:59.047 "base_bdevs_list": [ 00:14:59.047 { 00:14:59.047 "name": "BaseBdev1", 00:14:59.047 "uuid": "22fc989b-2169-5557-8a5f-ea14f0a87287", 00:14:59.047 "is_configured": true, 00:14:59.047 "data_offset": 2048, 00:14:59.047 "data_size": 63488 00:14:59.047 }, 00:14:59.047 { 00:14:59.047 "name": "BaseBdev2", 00:14:59.047 "uuid": "f321b834-4276-5409-9656-6d302d2b12e6", 00:14:59.047 "is_configured": true, 00:14:59.047 "data_offset": 2048, 00:14:59.047 "data_size": 63488 00:14:59.047 }, 00:14:59.047 { 00:14:59.047 "name": "BaseBdev3", 00:14:59.047 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:14:59.047 "is_configured": true, 00:14:59.047 "data_offset": 2048, 00:14:59.047 "data_size": 63488 00:14:59.047 }, 00:14:59.047 { 00:14:59.047 "name": "BaseBdev4", 00:14:59.047 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:14:59.047 "is_configured": true, 00:14:59.047 "data_offset": 2048, 00:14:59.047 "data_size": 63488 00:14:59.047 } 00:14:59.047 ] 00:14:59.047 }' 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.047 19:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.615 [2024-11-26 19:04:26.091454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.615 
19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:14:59.615 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:00.182 [2024-11-26 19:04:26.507154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:00.182 /dev/nbd0 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.182 1+0 records in 00:15:00.182 1+0 records out 00:15:00.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472491 s, 8.7 MB/s 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:00.182 19:04:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:00.182 19:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:10.186 63488+0 records in 00:15:10.186 63488+0 records out 00:15:10.186 32505856 bytes (33 MB, 31 MiB) copied, 8.98885 s, 3.6 MB/s 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:10.186 [2024-11-26 19:04:35.887861] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.186 [2024-11-26 19:04:35.899993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.186 
19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.186 "name": "raid_bdev1", 00:15:10.186 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:10.186 "strip_size_kb": 0, 00:15:10.186 "state": "online", 00:15:10.186 "raid_level": "raid1", 00:15:10.186 "superblock": true, 00:15:10.186 "num_base_bdevs": 4, 00:15:10.186 "num_base_bdevs_discovered": 3, 00:15:10.186 "num_base_bdevs_operational": 3, 00:15:10.186 "base_bdevs_list": [ 00:15:10.186 { 00:15:10.186 "name": null, 00:15:10.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.186 "is_configured": false, 00:15:10.186 "data_offset": 0, 00:15:10.186 "data_size": 63488 00:15:10.186 }, 00:15:10.186 { 00:15:10.186 "name": "BaseBdev2", 00:15:10.186 "uuid": "f321b834-4276-5409-9656-6d302d2b12e6", 00:15:10.186 "is_configured": true, 00:15:10.186 "data_offset": 2048, 00:15:10.186 "data_size": 63488 00:15:10.186 }, 00:15:10.186 { 00:15:10.186 "name": "BaseBdev3", 00:15:10.186 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 
00:15:10.186 "is_configured": true, 00:15:10.186 "data_offset": 2048, 00:15:10.186 "data_size": 63488 00:15:10.186 }, 00:15:10.186 { 00:15:10.186 "name": "BaseBdev4", 00:15:10.186 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:10.186 "is_configured": true, 00:15:10.186 "data_offset": 2048, 00:15:10.186 "data_size": 63488 00:15:10.186 } 00:15:10.186 ] 00:15:10.186 }' 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.186 19:04:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.186 19:04:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:10.186 19:04:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.186 19:04:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.186 [2024-11-26 19:04:36.384149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:10.186 [2024-11-26 19:04:36.398621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:10.186 19:04:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.186 19:04:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:10.186 [2024-11-26 19:04:36.401320] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.121 "name": "raid_bdev1", 00:15:11.121 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:11.121 "strip_size_kb": 0, 00:15:11.121 "state": "online", 00:15:11.121 "raid_level": "raid1", 00:15:11.121 "superblock": true, 00:15:11.121 "num_base_bdevs": 4, 00:15:11.121 "num_base_bdevs_discovered": 4, 00:15:11.121 "num_base_bdevs_operational": 4, 00:15:11.121 "process": { 00:15:11.121 "type": "rebuild", 00:15:11.121 "target": "spare", 00:15:11.121 "progress": { 00:15:11.121 "blocks": 20480, 00:15:11.121 "percent": 32 00:15:11.121 } 00:15:11.121 }, 00:15:11.121 "base_bdevs_list": [ 00:15:11.121 { 00:15:11.121 "name": "spare", 00:15:11.121 "uuid": "fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:11.121 "is_configured": true, 00:15:11.121 "data_offset": 2048, 00:15:11.121 "data_size": 63488 00:15:11.121 }, 00:15:11.121 { 00:15:11.121 "name": "BaseBdev2", 00:15:11.121 "uuid": "f321b834-4276-5409-9656-6d302d2b12e6", 00:15:11.121 "is_configured": true, 00:15:11.121 "data_offset": 2048, 00:15:11.121 "data_size": 63488 00:15:11.121 }, 00:15:11.121 { 00:15:11.121 "name": "BaseBdev3", 00:15:11.121 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:11.121 "is_configured": true, 00:15:11.121 "data_offset": 2048, 00:15:11.121 "data_size": 63488 00:15:11.121 }, 00:15:11.121 { 
00:15:11.121 "name": "BaseBdev4", 00:15:11.121 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:11.121 "is_configured": true, 00:15:11.121 "data_offset": 2048, 00:15:11.121 "data_size": 63488 00:15:11.121 } 00:15:11.121 ] 00:15:11.121 }' 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.121 [2024-11-26 19:04:37.587112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.121 [2024-11-26 19:04:37.611135] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:11.121 [2024-11-26 19:04:37.611582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.121 [2024-11-26 19:04:37.611732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:11.121 [2024-11-26 19:04:37.611792] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.121 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.121 "name": "raid_bdev1", 00:15:11.122 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:11.122 "strip_size_kb": 0, 00:15:11.122 "state": "online", 00:15:11.122 "raid_level": "raid1", 00:15:11.122 "superblock": true, 00:15:11.122 "num_base_bdevs": 4, 00:15:11.122 "num_base_bdevs_discovered": 3, 00:15:11.122 "num_base_bdevs_operational": 3, 00:15:11.122 "base_bdevs_list": [ 00:15:11.122 { 00:15:11.122 "name": null, 00:15:11.122 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:11.122 "is_configured": false, 00:15:11.122 "data_offset": 0, 00:15:11.122 "data_size": 63488 00:15:11.122 }, 00:15:11.122 { 00:15:11.122 "name": "BaseBdev2", 00:15:11.122 "uuid": "f321b834-4276-5409-9656-6d302d2b12e6", 00:15:11.122 "is_configured": true, 00:15:11.122 "data_offset": 2048, 00:15:11.122 "data_size": 63488 00:15:11.122 }, 00:15:11.122 { 00:15:11.122 "name": "BaseBdev3", 00:15:11.122 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:11.122 "is_configured": true, 00:15:11.122 "data_offset": 2048, 00:15:11.122 "data_size": 63488 00:15:11.122 }, 00:15:11.122 { 00:15:11.122 "name": "BaseBdev4", 00:15:11.122 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:11.122 "is_configured": true, 00:15:11.122 "data_offset": 2048, 00:15:11.122 "data_size": 63488 00:15:11.122 } 00:15:11.122 ] 00:15:11.122 }' 00:15:11.122 19:04:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.122 19:04:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.688 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.688 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.688 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.688 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.688 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.688 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.688 19:04:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.688 19:04:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.688 19:04:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.688 19:04:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.688 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.688 "name": "raid_bdev1", 00:15:11.688 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:11.688 "strip_size_kb": 0, 00:15:11.688 "state": "online", 00:15:11.688 "raid_level": "raid1", 00:15:11.688 "superblock": true, 00:15:11.688 "num_base_bdevs": 4, 00:15:11.688 "num_base_bdevs_discovered": 3, 00:15:11.688 "num_base_bdevs_operational": 3, 00:15:11.688 "base_bdevs_list": [ 00:15:11.688 { 00:15:11.688 "name": null, 00:15:11.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.688 "is_configured": false, 00:15:11.688 "data_offset": 0, 00:15:11.688 "data_size": 63488 00:15:11.688 }, 00:15:11.688 { 00:15:11.688 "name": "BaseBdev2", 00:15:11.688 "uuid": "f321b834-4276-5409-9656-6d302d2b12e6", 00:15:11.688 "is_configured": true, 00:15:11.688 "data_offset": 2048, 00:15:11.688 "data_size": 63488 00:15:11.688 }, 00:15:11.688 { 00:15:11.688 "name": "BaseBdev3", 00:15:11.688 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:11.688 "is_configured": true, 00:15:11.688 "data_offset": 2048, 00:15:11.688 "data_size": 63488 00:15:11.688 }, 00:15:11.688 { 00:15:11.688 "name": "BaseBdev4", 00:15:11.688 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:11.688 "is_configured": true, 00:15:11.688 "data_offset": 2048, 00:15:11.689 "data_size": 63488 00:15:11.689 } 00:15:11.689 ] 00:15:11.689 }' 00:15:11.689 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.946 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.946 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.946 19:04:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.946 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:11.946 19:04:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.946 19:04:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.947 [2024-11-26 19:04:38.400193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.947 [2024-11-26 19:04:38.413924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:11.947 19:04:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.947 19:04:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:11.947 [2024-11-26 19:04:38.416647] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.880 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.880 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.880 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.880 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.880 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.880 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.880 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.880 19:04:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.880 19:04:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.880 19:04:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.880 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.880 "name": "raid_bdev1", 00:15:12.880 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:12.880 "strip_size_kb": 0, 00:15:12.880 "state": "online", 00:15:12.880 "raid_level": "raid1", 00:15:12.880 "superblock": true, 00:15:12.880 "num_base_bdevs": 4, 00:15:12.880 "num_base_bdevs_discovered": 4, 00:15:12.880 "num_base_bdevs_operational": 4, 00:15:12.880 "process": { 00:15:12.880 "type": "rebuild", 00:15:12.880 "target": "spare", 00:15:12.880 "progress": { 00:15:12.880 "blocks": 20480, 00:15:12.880 "percent": 32 00:15:12.880 } 00:15:12.880 }, 00:15:12.880 "base_bdevs_list": [ 00:15:12.880 { 00:15:12.880 "name": "spare", 00:15:12.880 "uuid": "fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:12.880 "is_configured": true, 00:15:12.880 "data_offset": 2048, 00:15:12.880 "data_size": 63488 00:15:12.880 }, 00:15:12.880 { 00:15:12.880 "name": "BaseBdev2", 00:15:12.880 "uuid": "f321b834-4276-5409-9656-6d302d2b12e6", 00:15:12.880 "is_configured": true, 00:15:12.880 "data_offset": 2048, 00:15:12.880 "data_size": 63488 00:15:12.880 }, 00:15:12.880 { 00:15:12.880 "name": "BaseBdev3", 00:15:12.880 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:12.880 "is_configured": true, 00:15:12.880 "data_offset": 2048, 00:15:12.880 "data_size": 63488 00:15:12.880 }, 00:15:12.880 { 00:15:12.880 "name": "BaseBdev4", 00:15:12.880 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:12.880 "is_configured": true, 00:15:12.880 "data_offset": 2048, 00:15:12.880 "data_size": 63488 00:15:12.880 } 00:15:12.880 ] 00:15:12.880 }' 00:15:12.880 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:13.138 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.138 [2024-11-26 19:04:39.574089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.138 [2024-11-26 19:04:39.727199] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:13.138 19:04:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.139 19:04:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.397 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.397 "name": "raid_bdev1", 00:15:13.397 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:13.397 "strip_size_kb": 0, 00:15:13.397 "state": "online", 00:15:13.397 "raid_level": "raid1", 00:15:13.397 "superblock": true, 00:15:13.397 "num_base_bdevs": 4, 00:15:13.397 "num_base_bdevs_discovered": 3, 00:15:13.397 "num_base_bdevs_operational": 3, 00:15:13.397 "process": { 00:15:13.397 "type": "rebuild", 00:15:13.397 "target": "spare", 00:15:13.397 "progress": { 00:15:13.397 "blocks": 24576, 00:15:13.397 "percent": 38 00:15:13.397 } 00:15:13.397 }, 00:15:13.397 "base_bdevs_list": [ 00:15:13.397 { 00:15:13.397 "name": "spare", 00:15:13.397 "uuid": "fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:13.397 "is_configured": true, 00:15:13.397 "data_offset": 2048, 00:15:13.397 "data_size": 63488 00:15:13.397 }, 00:15:13.397 { 00:15:13.397 "name": null, 00:15:13.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.397 "is_configured": false, 00:15:13.397 "data_offset": 0, 00:15:13.397 "data_size": 63488 00:15:13.397 }, 00:15:13.397 { 00:15:13.397 "name": "BaseBdev3", 
00:15:13.397 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:13.397 "is_configured": true, 00:15:13.397 "data_offset": 2048, 00:15:13.397 "data_size": 63488 00:15:13.397 }, 00:15:13.397 { 00:15:13.397 "name": "BaseBdev4", 00:15:13.397 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:13.397 "is_configured": true, 00:15:13.397 "data_offset": 2048, 00:15:13.397 "data_size": 63488 00:15:13.397 } 00:15:13.397 ] 00:15:13.397 }' 00:15:13.397 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.397 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.397 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.397 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=517 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.398 
19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.398 "name": "raid_bdev1", 00:15:13.398 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:13.398 "strip_size_kb": 0, 00:15:13.398 "state": "online", 00:15:13.398 "raid_level": "raid1", 00:15:13.398 "superblock": true, 00:15:13.398 "num_base_bdevs": 4, 00:15:13.398 "num_base_bdevs_discovered": 3, 00:15:13.398 "num_base_bdevs_operational": 3, 00:15:13.398 "process": { 00:15:13.398 "type": "rebuild", 00:15:13.398 "target": "spare", 00:15:13.398 "progress": { 00:15:13.398 "blocks": 26624, 00:15:13.398 "percent": 41 00:15:13.398 } 00:15:13.398 }, 00:15:13.398 "base_bdevs_list": [ 00:15:13.398 { 00:15:13.398 "name": "spare", 00:15:13.398 "uuid": "fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:13.398 "is_configured": true, 00:15:13.398 "data_offset": 2048, 00:15:13.398 "data_size": 63488 00:15:13.398 }, 00:15:13.398 { 00:15:13.398 "name": null, 00:15:13.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.398 "is_configured": false, 00:15:13.398 "data_offset": 0, 00:15:13.398 "data_size": 63488 00:15:13.398 }, 00:15:13.398 { 00:15:13.398 "name": "BaseBdev3", 00:15:13.398 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:13.398 "is_configured": true, 00:15:13.398 "data_offset": 2048, 00:15:13.398 "data_size": 63488 00:15:13.398 }, 00:15:13.398 { 00:15:13.398 "name": "BaseBdev4", 00:15:13.398 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:13.398 "is_configured": true, 00:15:13.398 "data_offset": 2048, 00:15:13.398 "data_size": 63488 00:15:13.398 } 00:15:13.398 ] 00:15:13.398 }' 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.398 19:04:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.398 19:04:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.656 19:04:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.656 19:04:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.590 "name": "raid_bdev1", 00:15:14.590 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:14.590 "strip_size_kb": 0, 00:15:14.590 "state": "online", 00:15:14.590 "raid_level": "raid1", 00:15:14.590 "superblock": true, 00:15:14.590 "num_base_bdevs": 4, 
00:15:14.590 "num_base_bdevs_discovered": 3, 00:15:14.590 "num_base_bdevs_operational": 3, 00:15:14.590 "process": { 00:15:14.590 "type": "rebuild", 00:15:14.590 "target": "spare", 00:15:14.590 "progress": { 00:15:14.590 "blocks": 49152, 00:15:14.590 "percent": 77 00:15:14.590 } 00:15:14.590 }, 00:15:14.590 "base_bdevs_list": [ 00:15:14.590 { 00:15:14.590 "name": "spare", 00:15:14.590 "uuid": "fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:14.590 "is_configured": true, 00:15:14.590 "data_offset": 2048, 00:15:14.590 "data_size": 63488 00:15:14.590 }, 00:15:14.590 { 00:15:14.590 "name": null, 00:15:14.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.590 "is_configured": false, 00:15:14.590 "data_offset": 0, 00:15:14.590 "data_size": 63488 00:15:14.590 }, 00:15:14.590 { 00:15:14.590 "name": "BaseBdev3", 00:15:14.590 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:14.590 "is_configured": true, 00:15:14.590 "data_offset": 2048, 00:15:14.590 "data_size": 63488 00:15:14.590 }, 00:15:14.590 { 00:15:14.590 "name": "BaseBdev4", 00:15:14.590 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:14.590 "is_configured": true, 00:15:14.590 "data_offset": 2048, 00:15:14.590 "data_size": 63488 00:15:14.590 } 00:15:14.590 ] 00:15:14.590 }' 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.590 19:04:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.156 [2024-11-26 19:04:41.647147] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:15.156 [2024-11-26 19:04:41.647277] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:15.156 [2024-11-26 19:04:41.647522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.770 "name": "raid_bdev1", 00:15:15.770 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:15.770 "strip_size_kb": 0, 00:15:15.770 "state": "online", 00:15:15.770 "raid_level": "raid1", 00:15:15.770 "superblock": true, 00:15:15.770 "num_base_bdevs": 4, 00:15:15.770 "num_base_bdevs_discovered": 3, 00:15:15.770 "num_base_bdevs_operational": 3, 00:15:15.770 "base_bdevs_list": [ 00:15:15.770 { 00:15:15.770 "name": "spare", 00:15:15.770 "uuid": 
"fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:15.770 "is_configured": true, 00:15:15.770 "data_offset": 2048, 00:15:15.770 "data_size": 63488 00:15:15.770 }, 00:15:15.770 { 00:15:15.770 "name": null, 00:15:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.770 "is_configured": false, 00:15:15.770 "data_offset": 0, 00:15:15.770 "data_size": 63488 00:15:15.770 }, 00:15:15.770 { 00:15:15.770 "name": "BaseBdev3", 00:15:15.770 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:15.770 "is_configured": true, 00:15:15.770 "data_offset": 2048, 00:15:15.770 "data_size": 63488 00:15:15.770 }, 00:15:15.770 { 00:15:15.770 "name": "BaseBdev4", 00:15:15.770 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:15.770 "is_configured": true, 00:15:15.770 "data_offset": 2048, 00:15:15.770 "data_size": 63488 00:15:15.770 } 00:15:15.770 ] 00:15:15.770 }' 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.770 19:04:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.770 19:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.029 "name": "raid_bdev1", 00:15:16.029 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:16.029 "strip_size_kb": 0, 00:15:16.029 "state": "online", 00:15:16.029 "raid_level": "raid1", 00:15:16.029 "superblock": true, 00:15:16.029 "num_base_bdevs": 4, 00:15:16.029 "num_base_bdevs_discovered": 3, 00:15:16.029 "num_base_bdevs_operational": 3, 00:15:16.029 "base_bdevs_list": [ 00:15:16.029 { 00:15:16.029 "name": "spare", 00:15:16.029 "uuid": "fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:16.029 "is_configured": true, 00:15:16.029 "data_offset": 2048, 00:15:16.029 "data_size": 63488 00:15:16.029 }, 00:15:16.029 { 00:15:16.029 "name": null, 00:15:16.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.029 "is_configured": false, 00:15:16.029 "data_offset": 0, 00:15:16.029 "data_size": 63488 00:15:16.029 }, 00:15:16.029 { 00:15:16.029 "name": "BaseBdev3", 00:15:16.029 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:16.029 "is_configured": true, 00:15:16.029 "data_offset": 2048, 00:15:16.029 "data_size": 63488 00:15:16.029 }, 00:15:16.029 { 00:15:16.029 "name": "BaseBdev4", 00:15:16.029 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:16.029 "is_configured": true, 00:15:16.029 "data_offset": 2048, 00:15:16.029 "data_size": 63488 00:15:16.029 } 00:15:16.029 ] 00:15:16.029 }' 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.029 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.030 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.030 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.030 19:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.030 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.030 19:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.030 19:04:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.030 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.030 "name": "raid_bdev1", 00:15:16.030 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:16.030 "strip_size_kb": 0, 00:15:16.030 "state": "online", 00:15:16.030 "raid_level": "raid1", 00:15:16.030 "superblock": true, 00:15:16.030 "num_base_bdevs": 4, 00:15:16.030 "num_base_bdevs_discovered": 3, 00:15:16.030 "num_base_bdevs_operational": 3, 00:15:16.030 "base_bdevs_list": [ 00:15:16.030 { 00:15:16.030 "name": "spare", 00:15:16.030 "uuid": "fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:16.030 "is_configured": true, 00:15:16.030 "data_offset": 2048, 00:15:16.030 "data_size": 63488 00:15:16.030 }, 00:15:16.030 { 00:15:16.030 "name": null, 00:15:16.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.030 "is_configured": false, 00:15:16.030 "data_offset": 0, 00:15:16.030 "data_size": 63488 00:15:16.030 }, 00:15:16.030 { 00:15:16.030 "name": "BaseBdev3", 00:15:16.030 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:16.030 "is_configured": true, 00:15:16.030 "data_offset": 2048, 00:15:16.030 "data_size": 63488 00:15:16.030 }, 00:15:16.030 { 00:15:16.030 "name": "BaseBdev4", 00:15:16.030 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:16.030 "is_configured": true, 00:15:16.030 "data_offset": 2048, 00:15:16.030 "data_size": 63488 00:15:16.030 } 00:15:16.030 ] 00:15:16.030 }' 00:15:16.030 19:04:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.030 19:04:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.596 
[2024-11-26 19:04:43.069860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.596 [2024-11-26 19:04:43.069910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.596 [2024-11-26 19:04:43.070043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.596 [2024-11-26 19:04:43.070168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.596 [2024-11-26 19:04:43.070186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:16.596 19:04:43 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:16.596 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:16.854 /dev/nbd0 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:15:17.113 1+0 records in 00:15:17.113 1+0 records out 00:15:17.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391661 s, 10.5 MB/s 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:17.113 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:17.372 /dev/nbd1 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.372 1+0 records in 00:15:17.372 1+0 records out 00:15:17.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483602 s, 8.5 MB/s 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:17.372 19:04:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:17.630 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:17.630 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.630 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:17.630 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.630 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local 
i 00:15:17.630 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.630 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:17.889 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.889 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.889 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.889 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.889 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.889 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.889 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:17.889 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.889 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.889 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:18.147 
19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.147 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:18.148 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.148 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.148 [2024-11-26 19:04:44.682138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:18.148 [2024-11-26 19:04:44.682488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.148 [2024-11-26 19:04:44.682564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:18.148 [2024-11-26 19:04:44.682593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.148 [2024-11-26 19:04:44.687696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.148 [2024-11-26 19:04:44.687797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:18.148 [2024-11-26 19:04:44.688040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:18.148 [2024-11-26 19:04:44.688149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:15:18.148 [2024-11-26 19:04:44.688593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.148 spare 00:15:18.148 [2024-11-26 19:04:44.688892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:18.148 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.148 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:18.148 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.148 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.407 [2024-11-26 19:04:44.789121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:18.407 [2024-11-26 19:04:44.789215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:18.407 [2024-11-26 19:04:44.790032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:18.407 [2024-11-26 19:04:44.790560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:18.407 [2024-11-26 19:04:44.790604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:18.407 [2024-11-26 19:04:44.790993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.407 "name": "raid_bdev1", 00:15:18.407 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:18.407 "strip_size_kb": 0, 00:15:18.407 "state": "online", 00:15:18.407 "raid_level": "raid1", 00:15:18.407 "superblock": true, 00:15:18.407 "num_base_bdevs": 4, 00:15:18.407 "num_base_bdevs_discovered": 3, 00:15:18.407 "num_base_bdevs_operational": 3, 00:15:18.407 "base_bdevs_list": [ 00:15:18.407 { 00:15:18.407 "name": "spare", 00:15:18.407 "uuid": "fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:18.407 "is_configured": true, 00:15:18.407 "data_offset": 2048, 00:15:18.407 "data_size": 63488 00:15:18.407 }, 00:15:18.407 { 00:15:18.407 "name": null, 
00:15:18.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.407 "is_configured": false, 00:15:18.407 "data_offset": 2048, 00:15:18.407 "data_size": 63488 00:15:18.407 }, 00:15:18.407 { 00:15:18.407 "name": "BaseBdev3", 00:15:18.407 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:18.407 "is_configured": true, 00:15:18.407 "data_offset": 2048, 00:15:18.407 "data_size": 63488 00:15:18.407 }, 00:15:18.407 { 00:15:18.407 "name": "BaseBdev4", 00:15:18.407 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:18.407 "is_configured": true, 00:15:18.407 "data_offset": 2048, 00:15:18.407 "data_size": 63488 00:15:18.407 } 00:15:18.407 ] 00:15:18.407 }' 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.407 19:04:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.975 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.975 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.975 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.975 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.975 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.975 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.975 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.975 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.975 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.975 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.975 19:04:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.975 "name": "raid_bdev1", 00:15:18.975 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:18.975 "strip_size_kb": 0, 00:15:18.975 "state": "online", 00:15:18.975 "raid_level": "raid1", 00:15:18.975 "superblock": true, 00:15:18.975 "num_base_bdevs": 4, 00:15:18.975 "num_base_bdevs_discovered": 3, 00:15:18.975 "num_base_bdevs_operational": 3, 00:15:18.975 "base_bdevs_list": [ 00:15:18.975 { 00:15:18.975 "name": "spare", 00:15:18.975 "uuid": "fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:18.975 "is_configured": true, 00:15:18.975 "data_offset": 2048, 00:15:18.975 "data_size": 63488 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "name": null, 00:15:18.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.975 "is_configured": false, 00:15:18.975 "data_offset": 2048, 00:15:18.975 "data_size": 63488 00:15:18.975 }, 00:15:18.975 { 00:15:18.975 "name": "BaseBdev3", 00:15:18.975 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:18.976 "is_configured": true, 00:15:18.976 "data_offset": 2048, 00:15:18.976 "data_size": 63488 00:15:18.976 }, 00:15:18.976 { 00:15:18.976 "name": "BaseBdev4", 00:15:18.976 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:18.976 "is_configured": true, 00:15:18.976 "data_offset": 2048, 00:15:18.976 "data_size": 63488 00:15:18.976 } 00:15:18.976 ] 00:15:18.976 }' 00:15:18.976 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.976 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.976 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.976 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.976 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.976 19:04:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:18.976 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.976 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.976 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.234 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.235 [2024-11-26 19:04:45.632709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.235 "name": "raid_bdev1", 00:15:19.235 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:19.235 "strip_size_kb": 0, 00:15:19.235 "state": "online", 00:15:19.235 "raid_level": "raid1", 00:15:19.235 "superblock": true, 00:15:19.235 "num_base_bdevs": 4, 00:15:19.235 "num_base_bdevs_discovered": 2, 00:15:19.235 "num_base_bdevs_operational": 2, 00:15:19.235 "base_bdevs_list": [ 00:15:19.235 { 00:15:19.235 "name": null, 00:15:19.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.235 "is_configured": false, 00:15:19.235 "data_offset": 0, 00:15:19.235 "data_size": 63488 00:15:19.235 }, 00:15:19.235 { 00:15:19.235 "name": null, 00:15:19.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.235 "is_configured": false, 00:15:19.235 "data_offset": 2048, 00:15:19.235 "data_size": 63488 00:15:19.235 }, 00:15:19.235 { 00:15:19.235 "name": "BaseBdev3", 00:15:19.235 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:19.235 "is_configured": true, 00:15:19.235 "data_offset": 2048, 00:15:19.235 "data_size": 63488 00:15:19.235 }, 00:15:19.235 { 00:15:19.235 "name": "BaseBdev4", 00:15:19.235 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:19.235 "is_configured": 
true, 00:15:19.235 "data_offset": 2048, 00:15:19.235 "data_size": 63488 00:15:19.235 } 00:15:19.235 ] 00:15:19.235 }' 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.235 19:04:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.802 19:04:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:19.802 19:04:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.802 19:04:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.802 [2024-11-26 19:04:46.208831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:19.802 [2024-11-26 19:04:46.209462] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:19.802 [2024-11-26 19:04:46.209502] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:19.802 [2024-11-26 19:04:46.209565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:19.802 [2024-11-26 19:04:46.223458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:19.802 19:04:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.802 19:04:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:19.802 [2024-11-26 19:04:46.227165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:20.737 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.737 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.737 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.737 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.738 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.738 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.738 19:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.738 19:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.738 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.738 19:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.738 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.738 "name": "raid_bdev1", 00:15:20.738 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:20.738 "strip_size_kb": 0, 00:15:20.738 "state": "online", 00:15:20.738 "raid_level": "raid1", 
00:15:20.738 "superblock": true, 00:15:20.738 "num_base_bdevs": 4, 00:15:20.738 "num_base_bdevs_discovered": 3, 00:15:20.738 "num_base_bdevs_operational": 3, 00:15:20.738 "process": { 00:15:20.738 "type": "rebuild", 00:15:20.738 "target": "spare", 00:15:20.738 "progress": { 00:15:20.738 "blocks": 20480, 00:15:20.738 "percent": 32 00:15:20.738 } 00:15:20.738 }, 00:15:20.738 "base_bdevs_list": [ 00:15:20.738 { 00:15:20.738 "name": "spare", 00:15:20.738 "uuid": "fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:20.738 "is_configured": true, 00:15:20.738 "data_offset": 2048, 00:15:20.738 "data_size": 63488 00:15:20.738 }, 00:15:20.738 { 00:15:20.738 "name": null, 00:15:20.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.738 "is_configured": false, 00:15:20.738 "data_offset": 2048, 00:15:20.738 "data_size": 63488 00:15:20.738 }, 00:15:20.738 { 00:15:20.738 "name": "BaseBdev3", 00:15:20.738 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:20.738 "is_configured": true, 00:15:20.738 "data_offset": 2048, 00:15:20.738 "data_size": 63488 00:15:20.738 }, 00:15:20.738 { 00:15:20.738 "name": "BaseBdev4", 00:15:20.738 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:20.738 "is_configured": true, 00:15:20.738 "data_offset": 2048, 00:15:20.738 "data_size": 63488 00:15:20.738 } 00:15:20.738 ] 00:15:20.738 }' 00:15:20.738 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.738 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.738 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.996 [2024-11-26 19:04:47.401197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:20.996 [2024-11-26 19:04:47.439874] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:20.996 [2024-11-26 19:04:47.439998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.996 [2024-11-26 19:04:47.440032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:20.996 [2024-11-26 19:04:47.440044] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.996 "name": "raid_bdev1", 00:15:20.996 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:20.996 "strip_size_kb": 0, 00:15:20.996 "state": "online", 00:15:20.996 "raid_level": "raid1", 00:15:20.996 "superblock": true, 00:15:20.996 "num_base_bdevs": 4, 00:15:20.996 "num_base_bdevs_discovered": 2, 00:15:20.996 "num_base_bdevs_operational": 2, 00:15:20.996 "base_bdevs_list": [ 00:15:20.996 { 00:15:20.996 "name": null, 00:15:20.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.996 "is_configured": false, 00:15:20.996 "data_offset": 0, 00:15:20.996 "data_size": 63488 00:15:20.996 }, 00:15:20.996 { 00:15:20.996 "name": null, 00:15:20.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.996 "is_configured": false, 00:15:20.996 "data_offset": 2048, 00:15:20.996 "data_size": 63488 00:15:20.996 }, 00:15:20.996 { 00:15:20.996 "name": "BaseBdev3", 00:15:20.996 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:20.996 "is_configured": true, 00:15:20.996 "data_offset": 2048, 00:15:20.996 "data_size": 63488 00:15:20.996 }, 00:15:20.996 { 00:15:20.996 "name": "BaseBdev4", 00:15:20.996 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:20.996 "is_configured": true, 00:15:20.996 "data_offset": 2048, 00:15:20.996 "data_size": 63488 00:15:20.996 } 00:15:20.996 ] 00:15:20.996 }' 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:20.996 19:04:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.563 19:04:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:21.563 19:04:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.563 19:04:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.563 [2024-11-26 19:04:48.026155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:21.563 [2024-11-26 19:04:48.026259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.563 [2024-11-26 19:04:48.026327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:21.563 [2024-11-26 19:04:48.026347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.563 [2024-11-26 19:04:48.027068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.563 [2024-11-26 19:04:48.027103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:21.563 [2024-11-26 19:04:48.027260] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:21.563 [2024-11-26 19:04:48.027302] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:21.563 [2024-11-26 19:04:48.027328] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:21.563 [2024-11-26 19:04:48.027370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.563 [2024-11-26 19:04:48.041347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:21.563 spare 00:15:21.563 19:04:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.563 19:04:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:21.563 [2024-11-26 19:04:48.044763] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:22.498 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.498 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.498 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.498 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.498 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.498 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.498 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.498 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.498 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.498 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.498 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.498 "name": "raid_bdev1", 00:15:22.498 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:22.498 "strip_size_kb": 0, 00:15:22.498 "state": "online", 00:15:22.498 
"raid_level": "raid1", 00:15:22.498 "superblock": true, 00:15:22.498 "num_base_bdevs": 4, 00:15:22.498 "num_base_bdevs_discovered": 3, 00:15:22.498 "num_base_bdevs_operational": 3, 00:15:22.498 "process": { 00:15:22.498 "type": "rebuild", 00:15:22.498 "target": "spare", 00:15:22.498 "progress": { 00:15:22.498 "blocks": 18432, 00:15:22.498 "percent": 29 00:15:22.498 } 00:15:22.498 }, 00:15:22.498 "base_bdevs_list": [ 00:15:22.498 { 00:15:22.498 "name": "spare", 00:15:22.499 "uuid": "fa03e1a1-24ce-5800-a435-e8769c0efacd", 00:15:22.499 "is_configured": true, 00:15:22.499 "data_offset": 2048, 00:15:22.499 "data_size": 63488 00:15:22.499 }, 00:15:22.499 { 00:15:22.499 "name": null, 00:15:22.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.499 "is_configured": false, 00:15:22.499 "data_offset": 2048, 00:15:22.499 "data_size": 63488 00:15:22.499 }, 00:15:22.499 { 00:15:22.499 "name": "BaseBdev3", 00:15:22.499 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:22.499 "is_configured": true, 00:15:22.499 "data_offset": 2048, 00:15:22.499 "data_size": 63488 00:15:22.499 }, 00:15:22.499 { 00:15:22.499 "name": "BaseBdev4", 00:15:22.499 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:22.499 "is_configured": true, 00:15:22.499 "data_offset": 2048, 00:15:22.499 "data_size": 63488 00:15:22.499 } 00:15:22.499 ] 00:15:22.499 }' 00:15:22.499 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.760 [2024-11-26 19:04:49.206697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.760 [2024-11-26 19:04:49.257491] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:22.760 [2024-11-26 19:04:49.258810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.760 [2024-11-26 19:04:49.258968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.760 [2024-11-26 19:04:49.259028] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.760 
19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.760 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.760 "name": "raid_bdev1", 00:15:22.760 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:22.760 "strip_size_kb": 0, 00:15:22.760 "state": "online", 00:15:22.760 "raid_level": "raid1", 00:15:22.760 "superblock": true, 00:15:22.760 "num_base_bdevs": 4, 00:15:22.760 "num_base_bdevs_discovered": 2, 00:15:22.760 "num_base_bdevs_operational": 2, 00:15:22.760 "base_bdevs_list": [ 00:15:22.760 { 00:15:22.760 "name": null, 00:15:22.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.760 "is_configured": false, 00:15:22.760 "data_offset": 0, 00:15:22.760 "data_size": 63488 00:15:22.760 }, 00:15:22.760 { 00:15:22.760 "name": null, 00:15:22.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.760 "is_configured": false, 00:15:22.760 "data_offset": 2048, 00:15:22.760 "data_size": 63488 00:15:22.760 }, 00:15:22.760 { 00:15:22.760 "name": "BaseBdev3", 00:15:22.760 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:22.760 "is_configured": true, 00:15:22.760 "data_offset": 2048, 00:15:22.761 "data_size": 63488 00:15:22.761 }, 00:15:22.761 { 00:15:22.761 "name": "BaseBdev4", 00:15:22.761 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:22.761 "is_configured": true, 00:15:22.761 "data_offset": 2048, 00:15:22.761 "data_size": 63488 00:15:22.761 } 00:15:22.761 ] 00:15:22.761 }' 00:15:22.761 19:04:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.761 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.327 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.327 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.327 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.327 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.327 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.327 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.327 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.327 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.327 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.327 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.327 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.327 "name": "raid_bdev1", 00:15:23.327 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:23.327 "strip_size_kb": 0, 00:15:23.327 "state": "online", 00:15:23.327 "raid_level": "raid1", 00:15:23.327 "superblock": true, 00:15:23.327 "num_base_bdevs": 4, 00:15:23.328 "num_base_bdevs_discovered": 2, 00:15:23.328 "num_base_bdevs_operational": 2, 00:15:23.328 "base_bdevs_list": [ 00:15:23.328 { 00:15:23.328 "name": null, 00:15:23.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.328 "is_configured": false, 00:15:23.328 "data_offset": 0, 00:15:23.328 "data_size": 63488 00:15:23.328 }, 00:15:23.328 
{ 00:15:23.328 "name": null, 00:15:23.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.328 "is_configured": false, 00:15:23.328 "data_offset": 2048, 00:15:23.328 "data_size": 63488 00:15:23.328 }, 00:15:23.328 { 00:15:23.328 "name": "BaseBdev3", 00:15:23.328 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:23.328 "is_configured": true, 00:15:23.328 "data_offset": 2048, 00:15:23.328 "data_size": 63488 00:15:23.328 }, 00:15:23.328 { 00:15:23.328 "name": "BaseBdev4", 00:15:23.328 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:23.328 "is_configured": true, 00:15:23.328 "data_offset": 2048, 00:15:23.328 "data_size": 63488 00:15:23.328 } 00:15:23.328 ] 00:15:23.328 }' 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.328 [2024-11-26 19:04:49.929657] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:23.328 [2024-11-26 19:04:49.929754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.328 [2024-11-26 19:04:49.929789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:23.328 [2024-11-26 19:04:49.929808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.328 [2024-11-26 19:04:49.930533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.328 [2024-11-26 19:04:49.930588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:23.328 [2024-11-26 19:04:49.930720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:23.328 [2024-11-26 19:04:49.930755] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:23.328 [2024-11-26 19:04:49.930769] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:23.328 [2024-11-26 19:04:49.930815] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:23.328 BaseBdev1 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.328 19:04:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.703 19:04:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.703 "name": "raid_bdev1", 00:15:24.703 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:24.703 "strip_size_kb": 0, 00:15:24.703 "state": "online", 00:15:24.703 "raid_level": "raid1", 00:15:24.703 "superblock": true, 00:15:24.703 "num_base_bdevs": 4, 00:15:24.703 "num_base_bdevs_discovered": 2, 00:15:24.703 "num_base_bdevs_operational": 2, 00:15:24.703 "base_bdevs_list": [ 00:15:24.703 { 00:15:24.703 "name": null, 00:15:24.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.703 "is_configured": false, 00:15:24.703 "data_offset": 0, 00:15:24.703 "data_size": 63488 00:15:24.703 }, 00:15:24.703 { 00:15:24.703 "name": null, 00:15:24.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.703 
"is_configured": false, 00:15:24.703 "data_offset": 2048, 00:15:24.703 "data_size": 63488 00:15:24.703 }, 00:15:24.703 { 00:15:24.703 "name": "BaseBdev3", 00:15:24.703 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:24.703 "is_configured": true, 00:15:24.703 "data_offset": 2048, 00:15:24.703 "data_size": 63488 00:15:24.703 }, 00:15:24.703 { 00:15:24.703 "name": "BaseBdev4", 00:15:24.703 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:24.703 "is_configured": true, 00:15:24.703 "data_offset": 2048, 00:15:24.703 "data_size": 63488 00:15:24.703 } 00:15:24.703 ] 00:15:24.703 }' 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.703 19:04:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:24.962 "name": "raid_bdev1", 00:15:24.962 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:24.962 "strip_size_kb": 0, 00:15:24.962 "state": "online", 00:15:24.962 "raid_level": "raid1", 00:15:24.962 "superblock": true, 00:15:24.962 "num_base_bdevs": 4, 00:15:24.962 "num_base_bdevs_discovered": 2, 00:15:24.962 "num_base_bdevs_operational": 2, 00:15:24.962 "base_bdevs_list": [ 00:15:24.962 { 00:15:24.962 "name": null, 00:15:24.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.962 "is_configured": false, 00:15:24.962 "data_offset": 0, 00:15:24.962 "data_size": 63488 00:15:24.962 }, 00:15:24.962 { 00:15:24.962 "name": null, 00:15:24.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.962 "is_configured": false, 00:15:24.962 "data_offset": 2048, 00:15:24.962 "data_size": 63488 00:15:24.962 }, 00:15:24.962 { 00:15:24.962 "name": "BaseBdev3", 00:15:24.962 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:24.962 "is_configured": true, 00:15:24.962 "data_offset": 2048, 00:15:24.962 "data_size": 63488 00:15:24.962 }, 00:15:24.962 { 00:15:24.962 "name": "BaseBdev4", 00:15:24.962 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:24.962 "is_configured": true, 00:15:24.962 "data_offset": 2048, 00:15:24.962 "data_size": 63488 00:15:24.962 } 00:15:24.962 ] 00:15:24.962 }' 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.962 [2024-11-26 19:04:51.574164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.962 [2024-11-26 19:04:51.574583] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:24.962 [2024-11-26 19:04:51.574632] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:24.962 request: 00:15:24.962 { 00:15:24.962 "base_bdev": "BaseBdev1", 00:15:24.962 "raid_bdev": "raid_bdev1", 00:15:24.962 "method": "bdev_raid_add_base_bdev", 00:15:24.962 "req_id": 1 00:15:24.962 } 00:15:24.962 Got JSON-RPC error response 00:15:24.962 response: 00:15:24.962 { 00:15:24.962 "code": -22, 00:15:24.962 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:24.962 } 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:24.962 19:04:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.338 "name": "raid_bdev1", 00:15:26.338 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:26.338 "strip_size_kb": 0, 00:15:26.338 "state": "online", 00:15:26.338 "raid_level": "raid1", 00:15:26.338 "superblock": true, 00:15:26.338 "num_base_bdevs": 4, 00:15:26.338 "num_base_bdevs_discovered": 2, 00:15:26.338 "num_base_bdevs_operational": 2, 00:15:26.338 "base_bdevs_list": [ 00:15:26.338 { 00:15:26.338 "name": null, 00:15:26.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.338 "is_configured": false, 00:15:26.338 "data_offset": 0, 00:15:26.338 "data_size": 63488 00:15:26.338 }, 00:15:26.338 { 00:15:26.338 "name": null, 00:15:26.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.338 "is_configured": false, 00:15:26.338 "data_offset": 2048, 00:15:26.338 "data_size": 63488 00:15:26.338 }, 00:15:26.338 { 00:15:26.338 "name": "BaseBdev3", 00:15:26.338 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:26.338 "is_configured": true, 00:15:26.338 "data_offset": 2048, 00:15:26.338 "data_size": 63488 00:15:26.338 }, 00:15:26.338 { 00:15:26.338 "name": "BaseBdev4", 00:15:26.338 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:26.338 "is_configured": true, 00:15:26.338 "data_offset": 2048, 00:15:26.338 "data_size": 63488 00:15:26.338 } 00:15:26.338 ] 00:15:26.338 }' 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.338 19:04:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.596 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.596 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.596 19:04:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.596 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.596 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.596 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.596 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.596 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.596 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.596 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.596 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.596 "name": "raid_bdev1", 00:15:26.596 "uuid": "07112977-1ae8-4992-99bd-cf908b486cd6", 00:15:26.596 "strip_size_kb": 0, 00:15:26.596 "state": "online", 00:15:26.596 "raid_level": "raid1", 00:15:26.596 "superblock": true, 00:15:26.596 "num_base_bdevs": 4, 00:15:26.596 "num_base_bdevs_discovered": 2, 00:15:26.596 "num_base_bdevs_operational": 2, 00:15:26.596 "base_bdevs_list": [ 00:15:26.596 { 00:15:26.596 "name": null, 00:15:26.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.596 "is_configured": false, 00:15:26.596 "data_offset": 0, 00:15:26.596 "data_size": 63488 00:15:26.596 }, 00:15:26.596 { 00:15:26.596 "name": null, 00:15:26.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.596 "is_configured": false, 00:15:26.596 "data_offset": 2048, 00:15:26.596 "data_size": 63488 00:15:26.596 }, 00:15:26.596 { 00:15:26.596 "name": "BaseBdev3", 00:15:26.596 "uuid": "21de47c0-a56f-532f-9ec2-d1d0f88819bb", 00:15:26.596 "is_configured": true, 00:15:26.596 "data_offset": 2048, 00:15:26.596 "data_size": 63488 00:15:26.596 }, 
00:15:26.596 { 00:15:26.596 "name": "BaseBdev4", 00:15:26.596 "uuid": "a821f3a1-a275-5dce-a902-6d0242c72464", 00:15:26.596 "is_configured": true, 00:15:26.596 "data_offset": 2048, 00:15:26.596 "data_size": 63488 00:15:26.596 } 00:15:26.596 ] 00:15:26.596 }' 00:15:26.596 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78690 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78690 ']' 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78690 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78690 00:15:26.855 killing process with pid 78690 00:15:26.855 Received shutdown signal, test time was about 60.000000 seconds 00:15:26.855 00:15:26.855 Latency(us) 00:15:26.855 [2024-11-26T19:04:53.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.855 [2024-11-26T19:04:53.478Z] =================================================================================================================== 00:15:26.855 [2024-11-26T19:04:53.478Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78690' 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78690 00:15:26.855 19:04:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78690 00:15:26.855 [2024-11-26 19:04:53.310319] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.856 [2024-11-26 19:04:53.310544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.856 [2024-11-26 19:04:53.310661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.856 [2024-11-26 19:04:53.310680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:27.423 [2024-11-26 19:04:53.800713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.422 ************************************ 00:15:28.422 END TEST raid_rebuild_test_sb 00:15:28.422 ************************************ 00:15:28.422 19:04:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:28.422 00:15:28.422 real 0m30.922s 00:15:28.422 user 0m37.800s 00:15:28.422 sys 0m4.442s 00:15:28.422 19:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.422 19:04:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.422 19:04:55 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:28.422 19:04:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:28.422 19:04:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.422 19:04:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:15:28.422 ************************************ 00:15:28.422 START TEST raid_rebuild_test_io 00:15:28.422 ************************************ 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79501 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79501 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:28.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79501 ']' 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.422 19:04:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.681 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:28.681 Zero copy mechanism will not be used. 00:15:28.681 [2024-11-26 19:04:55.143773] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:15:28.681 [2024-11-26 19:04:55.144018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79501 ] 00:15:28.939 [2024-11-26 19:04:55.347154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.939 [2024-11-26 19:04:55.522817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.199 [2024-11-26 19:04:55.764087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.199 [2024-11-26 19:04:55.764159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 BaseBdev1_malloc 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 [2024-11-26 19:04:56.269941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:29.766 [2024-11-26 19:04:56.270265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.766 [2024-11-26 19:04:56.270326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:29.766 [2024-11-26 19:04:56.270349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.766 [2024-11-26 19:04:56.273552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.766 [2024-11-26 19:04:56.273611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:29.766 BaseBdev1 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 BaseBdev2_malloc 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 [2024-11-26 19:04:56.332412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:29.766 [2024-11-26 19:04:56.332833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.766 [2024-11-26 19:04:56.332902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:29.766 [2024-11-26 19:04:56.332924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.766 [2024-11-26 19:04:56.336205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.766 [2024-11-26 19:04:56.336509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:29.766 BaseBdev2 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.766 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:15:30.026 BaseBdev3_malloc 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.026 [2024-11-26 19:04:56.414380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:30.026 [2024-11-26 19:04:56.414511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.026 [2024-11-26 19:04:56.414554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:30.026 [2024-11-26 19:04:56.414580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.026 [2024-11-26 19:04:56.418005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.026 [2024-11-26 19:04:56.418244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:30.026 BaseBdev3 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.026 BaseBdev4_malloc 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.026 [2024-11-26 19:04:56.472048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:30.026 [2024-11-26 19:04:56.472160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.026 [2024-11-26 19:04:56.472200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:30.026 [2024-11-26 19:04:56.472220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.026 [2024-11-26 19:04:56.476161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.026 [2024-11-26 19:04:56.476495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:30.026 BaseBdev4 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.026 spare_malloc 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:15:30.026 spare_delay 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.026 [2024-11-26 19:04:56.543780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:30.026 [2024-11-26 19:04:56.543873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.026 [2024-11-26 19:04:56.543923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:30.026 [2024-11-26 19:04:56.543953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.026 [2024-11-26 19:04:56.548008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.026 [2024-11-26 19:04:56.548080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:30.026 spare 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.026 [2024-11-26 19:04:56.556447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.026 [2024-11-26 19:04:56.559880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.026 [2024-11-26 19:04:56.560012] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.026 [2024-11-26 19:04:56.560103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:30.026 [2024-11-26 19:04:56.560258] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:30.026 [2024-11-26 19:04:56.560304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:30.026 [2024-11-26 19:04:56.560752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:30.026 [2024-11-26 19:04:56.561057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:30.026 [2024-11-26 19:04:56.561102] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:30.026 [2024-11-26 19:04:56.561470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.026 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.027 
19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.027 "name": "raid_bdev1", 00:15:30.027 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:30.027 "strip_size_kb": 0, 00:15:30.027 "state": "online", 00:15:30.027 "raid_level": "raid1", 00:15:30.027 "superblock": false, 00:15:30.027 "num_base_bdevs": 4, 00:15:30.027 "num_base_bdevs_discovered": 4, 00:15:30.027 "num_base_bdevs_operational": 4, 00:15:30.027 "base_bdevs_list": [ 00:15:30.027 { 00:15:30.027 "name": "BaseBdev1", 00:15:30.027 "uuid": "508ea4c0-a404-5204-88c0-bc37810fef21", 00:15:30.027 "is_configured": true, 00:15:30.027 "data_offset": 0, 00:15:30.027 "data_size": 65536 00:15:30.027 }, 00:15:30.027 { 00:15:30.027 "name": "BaseBdev2", 00:15:30.027 "uuid": "776296db-dc71-5977-9853-eb83da924be7", 00:15:30.027 "is_configured": true, 00:15:30.027 "data_offset": 0, 00:15:30.027 "data_size": 65536 00:15:30.027 }, 00:15:30.027 { 00:15:30.027 "name": "BaseBdev3", 00:15:30.027 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:30.027 "is_configured": true, 00:15:30.027 "data_offset": 0, 00:15:30.027 "data_size": 65536 00:15:30.027 }, 00:15:30.027 { 00:15:30.027 "name": "BaseBdev4", 00:15:30.027 "uuid": 
"4d653849-de91-54c1-a721-600ece2a6d61", 00:15:30.027 "is_configured": true, 00:15:30.027 "data_offset": 0, 00:15:30.027 "data_size": 65536 00:15:30.027 } 00:15:30.027 ] 00:15:30.027 }' 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.027 19:04:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.593 [2024-11-26 19:04:57.081117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.593 [2024-11-26 19:04:57.196626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.593 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.851 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.851 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.851 "name": "raid_bdev1", 00:15:30.851 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:30.851 "strip_size_kb": 0, 00:15:30.851 "state": "online", 00:15:30.851 "raid_level": "raid1", 00:15:30.851 "superblock": false, 00:15:30.851 "num_base_bdevs": 4, 00:15:30.851 "num_base_bdevs_discovered": 3, 00:15:30.851 "num_base_bdevs_operational": 3, 00:15:30.851 "base_bdevs_list": [ 00:15:30.851 { 00:15:30.851 "name": null, 00:15:30.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.851 "is_configured": false, 00:15:30.851 "data_offset": 0, 00:15:30.851 "data_size": 65536 00:15:30.851 }, 00:15:30.851 { 00:15:30.851 "name": "BaseBdev2", 00:15:30.851 "uuid": "776296db-dc71-5977-9853-eb83da924be7", 00:15:30.851 "is_configured": true, 00:15:30.851 "data_offset": 0, 00:15:30.851 "data_size": 65536 00:15:30.851 }, 00:15:30.851 { 00:15:30.851 "name": "BaseBdev3", 00:15:30.851 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:30.851 "is_configured": true, 00:15:30.851 "data_offset": 0, 00:15:30.851 "data_size": 65536 00:15:30.851 }, 00:15:30.851 { 00:15:30.851 "name": "BaseBdev4", 00:15:30.851 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:30.851 "is_configured": true, 00:15:30.851 "data_offset": 0, 00:15:30.851 "data_size": 65536 00:15:30.851 } 00:15:30.851 ] 00:15:30.851 }' 00:15:30.851 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.851 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.851 [2024-11-26 19:04:57.322914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:15:30.851 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:30.851 Zero copy mechanism will not be used. 00:15:30.851 Running I/O for 60 seconds... 00:15:31.109 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:31.109 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.109 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.109 [2024-11-26 19:04:57.707404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.368 19:04:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.368 19:04:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:31.368 [2024-11-26 19:04:57.788918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:31.368 [2024-11-26 19:04:57.791865] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:31.368 [2024-11-26 19:04:57.928610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:31.368 [2024-11-26 19:04:57.931025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:31.627 [2024-11-26 19:04:58.147640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:31.627 [2024-11-26 19:04:58.148827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:32.143 130.00 IOPS, 390.00 MiB/s [2024-11-26T19:04:58.766Z] [2024-11-26 19:04:58.551813] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:32.143 19:04:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.143 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.143 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.143 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.143 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.143 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.143 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.143 19:04:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.143 19:04:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.428 19:04:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.428 [2024-11-26 19:04:58.805022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:32.428 [2024-11-26 19:04:58.805548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:32.428 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.428 "name": "raid_bdev1", 00:15:32.428 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:32.428 "strip_size_kb": 0, 00:15:32.428 "state": "online", 00:15:32.428 "raid_level": "raid1", 00:15:32.428 "superblock": false, 00:15:32.428 "num_base_bdevs": 4, 00:15:32.428 "num_base_bdevs_discovered": 4, 00:15:32.428 "num_base_bdevs_operational": 4, 00:15:32.428 "process": { 00:15:32.428 "type": "rebuild", 00:15:32.428 "target": "spare", 00:15:32.428 "progress": { 
00:15:32.428 "blocks": 8192, 00:15:32.428 "percent": 12 00:15:32.428 } 00:15:32.428 }, 00:15:32.428 "base_bdevs_list": [ 00:15:32.428 { 00:15:32.428 "name": "spare", 00:15:32.428 "uuid": "34418122-3b16-5dfa-b12b-e92a7bdfaaa7", 00:15:32.428 "is_configured": true, 00:15:32.428 "data_offset": 0, 00:15:32.428 "data_size": 65536 00:15:32.428 }, 00:15:32.428 { 00:15:32.428 "name": "BaseBdev2", 00:15:32.428 "uuid": "776296db-dc71-5977-9853-eb83da924be7", 00:15:32.428 "is_configured": true, 00:15:32.428 "data_offset": 0, 00:15:32.428 "data_size": 65536 00:15:32.428 }, 00:15:32.428 { 00:15:32.428 "name": "BaseBdev3", 00:15:32.428 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:32.428 "is_configured": true, 00:15:32.428 "data_offset": 0, 00:15:32.428 "data_size": 65536 00:15:32.428 }, 00:15:32.428 { 00:15:32.428 "name": "BaseBdev4", 00:15:32.428 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:32.428 "is_configured": true, 00:15:32.428 "data_offset": 0, 00:15:32.428 "data_size": 65536 00:15:32.428 } 00:15:32.428 ] 00:15:32.428 }' 00:15:32.428 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.428 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.428 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.428 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.428 19:04:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:32.428 19:04:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.428 19:04:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.428 [2024-11-26 19:04:58.923688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.720 [2024-11-26 19:04:59.102719] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:32.720 [2024-11-26 19:04:59.124354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.720 [2024-11-26 19:04:59.124487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.720 [2024-11-26 19:04:59.124511] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:32.720 [2024-11-26 19:04:59.157653] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.720 "name": "raid_bdev1", 00:15:32.720 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:32.720 "strip_size_kb": 0, 00:15:32.720 "state": "online", 00:15:32.720 "raid_level": "raid1", 00:15:32.720 "superblock": false, 00:15:32.720 "num_base_bdevs": 4, 00:15:32.720 "num_base_bdevs_discovered": 3, 00:15:32.720 "num_base_bdevs_operational": 3, 00:15:32.720 "base_bdevs_list": [ 00:15:32.720 { 00:15:32.720 "name": null, 00:15:32.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.720 "is_configured": false, 00:15:32.720 "data_offset": 0, 00:15:32.720 "data_size": 65536 00:15:32.720 }, 00:15:32.720 { 00:15:32.720 "name": "BaseBdev2", 00:15:32.720 "uuid": "776296db-dc71-5977-9853-eb83da924be7", 00:15:32.720 "is_configured": true, 00:15:32.720 "data_offset": 0, 00:15:32.720 "data_size": 65536 00:15:32.720 }, 00:15:32.720 { 00:15:32.720 "name": "BaseBdev3", 00:15:32.720 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:32.720 "is_configured": true, 00:15:32.720 "data_offset": 0, 00:15:32.720 "data_size": 65536 00:15:32.720 }, 00:15:32.720 { 00:15:32.720 "name": "BaseBdev4", 00:15:32.720 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:32.720 "is_configured": true, 00:15:32.720 "data_offset": 0, 00:15:32.720 "data_size": 65536 00:15:32.720 } 00:15:32.720 ] 00:15:32.720 }' 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.720 19:04:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:15:33.243 118.50 IOPS, 355.50 MiB/s [2024-11-26T19:04:59.866Z] 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.243 "name": "raid_bdev1", 00:15:33.243 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:33.243 "strip_size_kb": 0, 00:15:33.243 "state": "online", 00:15:33.243 "raid_level": "raid1", 00:15:33.243 "superblock": false, 00:15:33.243 "num_base_bdevs": 4, 00:15:33.243 "num_base_bdevs_discovered": 3, 00:15:33.243 "num_base_bdevs_operational": 3, 00:15:33.243 "base_bdevs_list": [ 00:15:33.243 { 00:15:33.243 "name": null, 00:15:33.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.243 "is_configured": false, 00:15:33.243 "data_offset": 0, 00:15:33.243 "data_size": 65536 00:15:33.243 }, 00:15:33.243 { 00:15:33.243 "name": "BaseBdev2", 00:15:33.243 "uuid": "776296db-dc71-5977-9853-eb83da924be7", 00:15:33.243 
"is_configured": true, 00:15:33.243 "data_offset": 0, 00:15:33.243 "data_size": 65536 00:15:33.243 }, 00:15:33.243 { 00:15:33.243 "name": "BaseBdev3", 00:15:33.243 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:33.243 "is_configured": true, 00:15:33.243 "data_offset": 0, 00:15:33.243 "data_size": 65536 00:15:33.243 }, 00:15:33.243 { 00:15:33.243 "name": "BaseBdev4", 00:15:33.243 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:33.243 "is_configured": true, 00:15:33.243 "data_offset": 0, 00:15:33.243 "data_size": 65536 00:15:33.243 } 00:15:33.243 ] 00:15:33.243 }' 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.243 19:04:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.243 [2024-11-26 19:04:59.860505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.501 19:04:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.501 19:04:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:33.501 [2024-11-26 19:04:59.929044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:33.501 [2024-11-26 19:04:59.931782] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:33.501 [2024-11-26 19:05:00.053468] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:33.501 [2024-11-26 19:05:00.055119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:33.758 [2024-11-26 19:05:00.285390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:33.758 [2024-11-26 19:05:00.286309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:34.323 125.67 IOPS, 377.00 MiB/s [2024-11-26T19:05:00.946Z] [2024-11-26 19:05:00.725462] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:34.323 19:05:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.323 19:05:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.323 19:05:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.323 19:05:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.323 19:05:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.323 19:05:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.323 19:05:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.323 19:05:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.323 19:05:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.323 19:05:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.581 19:05:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:34.581 "name": "raid_bdev1", 00:15:34.581 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:34.581 "strip_size_kb": 0, 00:15:34.581 "state": "online", 00:15:34.581 "raid_level": "raid1", 00:15:34.581 "superblock": false, 00:15:34.581 "num_base_bdevs": 4, 00:15:34.581 "num_base_bdevs_discovered": 4, 00:15:34.581 "num_base_bdevs_operational": 4, 00:15:34.581 "process": { 00:15:34.581 "type": "rebuild", 00:15:34.581 "target": "spare", 00:15:34.581 "progress": { 00:15:34.581 "blocks": 8192, 00:15:34.581 "percent": 12 00:15:34.581 } 00:15:34.581 }, 00:15:34.581 "base_bdevs_list": [ 00:15:34.581 { 00:15:34.581 "name": "spare", 00:15:34.581 "uuid": "34418122-3b16-5dfa-b12b-e92a7bdfaaa7", 00:15:34.581 "is_configured": true, 00:15:34.581 "data_offset": 0, 00:15:34.581 "data_size": 65536 00:15:34.581 }, 00:15:34.581 { 00:15:34.581 "name": "BaseBdev2", 00:15:34.581 "uuid": "776296db-dc71-5977-9853-eb83da924be7", 00:15:34.581 "is_configured": true, 00:15:34.581 "data_offset": 0, 00:15:34.581 "data_size": 65536 00:15:34.581 }, 00:15:34.581 { 00:15:34.581 "name": "BaseBdev3", 00:15:34.581 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:34.582 "is_configured": true, 00:15:34.582 "data_offset": 0, 00:15:34.582 "data_size": 65536 00:15:34.582 }, 00:15:34.582 { 00:15:34.582 "name": "BaseBdev4", 00:15:34.582 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:34.582 "is_configured": true, 00:15:34.582 "data_offset": 0, 00:15:34.582 "data_size": 65536 00:15:34.582 } 00:15:34.582 ] 00:15:34.582 }' 00:15:34.582 19:05:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.582 [2024-11-26 19:05:00.970639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:34.582 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.582 19:05:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.582 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.582 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:34.582 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:34.582 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:34.582 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:34.582 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:34.582 19:05:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.582 19:05:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.582 [2024-11-26 19:05:01.084571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.840 [2024-11-26 19:05:01.315139] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:34.840 [2024-11-26 19:05:01.315240] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:34.840 114.50 IOPS, 343.50 MiB/s [2024-11-26T19:05:01.463Z] 19:05:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.840 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.840 "name": "raid_bdev1", 00:15:34.840 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:34.840 "strip_size_kb": 0, 00:15:34.840 "state": "online", 00:15:34.840 "raid_level": "raid1", 00:15:34.840 "superblock": false, 00:15:34.840 "num_base_bdevs": 4, 00:15:34.840 "num_base_bdevs_discovered": 3, 00:15:34.840 "num_base_bdevs_operational": 3, 00:15:34.840 "process": { 00:15:34.840 "type": "rebuild", 00:15:34.840 "target": "spare", 00:15:34.840 "progress": { 00:15:34.840 "blocks": 12288, 00:15:34.840 "percent": 18 00:15:34.840 } 00:15:34.840 }, 00:15:34.840 "base_bdevs_list": [ 00:15:34.840 { 00:15:34.840 "name": "spare", 00:15:34.840 "uuid": "34418122-3b16-5dfa-b12b-e92a7bdfaaa7", 00:15:34.840 "is_configured": true, 00:15:34.840 "data_offset": 0, 00:15:34.840 "data_size": 65536 00:15:34.840 }, 00:15:34.840 { 00:15:34.840 "name": null, 00:15:34.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.840 "is_configured": false, 00:15:34.840 "data_offset": 0, 00:15:34.840 "data_size": 65536 00:15:34.840 }, 00:15:34.840 { 00:15:34.840 "name": "BaseBdev3", 
00:15:34.840 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:34.840 "is_configured": true, 00:15:34.840 "data_offset": 0, 00:15:34.840 "data_size": 65536 00:15:34.840 }, 00:15:34.840 { 00:15:34.840 "name": "BaseBdev4", 00:15:34.840 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:34.841 "is_configured": true, 00:15:34.841 "data_offset": 0, 00:15:34.841 "data_size": 65536 00:15:34.841 } 00:15:34.841 ] 00:15:34.841 }' 00:15:34.841 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.841 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.841 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.841 [2024-11-26 19:05:01.447145] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=539 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.099 "name": "raid_bdev1", 00:15:35.099 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:35.099 "strip_size_kb": 0, 00:15:35.099 "state": "online", 00:15:35.099 "raid_level": "raid1", 00:15:35.099 "superblock": false, 00:15:35.099 "num_base_bdevs": 4, 00:15:35.099 "num_base_bdevs_discovered": 3, 00:15:35.099 "num_base_bdevs_operational": 3, 00:15:35.099 "process": { 00:15:35.099 "type": "rebuild", 00:15:35.099 "target": "spare", 00:15:35.099 "progress": { 00:15:35.099 "blocks": 14336, 00:15:35.099 "percent": 21 00:15:35.099 } 00:15:35.099 }, 00:15:35.099 "base_bdevs_list": [ 00:15:35.099 { 00:15:35.099 "name": "spare", 00:15:35.099 "uuid": "34418122-3b16-5dfa-b12b-e92a7bdfaaa7", 00:15:35.099 "is_configured": true, 00:15:35.099 "data_offset": 0, 00:15:35.099 "data_size": 65536 00:15:35.099 }, 00:15:35.099 { 00:15:35.099 "name": null, 00:15:35.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.099 "is_configured": false, 00:15:35.099 "data_offset": 0, 00:15:35.099 "data_size": 65536 00:15:35.099 }, 00:15:35.099 { 00:15:35.099 "name": "BaseBdev3", 00:15:35.099 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:35.099 "is_configured": true, 00:15:35.099 "data_offset": 0, 00:15:35.099 "data_size": 65536 00:15:35.099 }, 00:15:35.099 { 00:15:35.099 "name": "BaseBdev4", 00:15:35.099 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:35.099 "is_configured": true, 00:15:35.099 "data_offset": 0, 00:15:35.099 "data_size": 65536 00:15:35.099 } 00:15:35.099 ] 00:15:35.099 }' 
00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.099 19:05:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.099 [2024-11-26 19:05:01.668598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:35.358 [2024-11-26 19:05:01.890748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:35.616 [2024-11-26 19:05:02.103006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:36.170 110.40 IOPS, 331.20 MiB/s [2024-11-26T19:05:02.793Z] [2024-11-26 19:05:02.594482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.170 "name": "raid_bdev1", 00:15:36.170 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:36.170 "strip_size_kb": 0, 00:15:36.170 "state": "online", 00:15:36.170 "raid_level": "raid1", 00:15:36.170 "superblock": false, 00:15:36.170 "num_base_bdevs": 4, 00:15:36.170 "num_base_bdevs_discovered": 3, 00:15:36.170 "num_base_bdevs_operational": 3, 00:15:36.170 "process": { 00:15:36.170 "type": "rebuild", 00:15:36.170 "target": "spare", 00:15:36.170 "progress": { 00:15:36.170 "blocks": 28672, 00:15:36.170 "percent": 43 00:15:36.170 } 00:15:36.170 }, 00:15:36.170 "base_bdevs_list": [ 00:15:36.170 { 00:15:36.170 "name": "spare", 00:15:36.170 "uuid": "34418122-3b16-5dfa-b12b-e92a7bdfaaa7", 00:15:36.170 "is_configured": true, 00:15:36.170 "data_offset": 0, 00:15:36.170 "data_size": 65536 00:15:36.170 }, 00:15:36.170 { 00:15:36.170 "name": null, 00:15:36.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.170 "is_configured": false, 00:15:36.170 "data_offset": 0, 00:15:36.170 "data_size": 65536 00:15:36.170 }, 00:15:36.170 { 00:15:36.170 "name": "BaseBdev3", 00:15:36.170 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:36.170 "is_configured": true, 00:15:36.170 "data_offset": 0, 00:15:36.170 "data_size": 65536 00:15:36.170 }, 00:15:36.170 { 00:15:36.170 "name": "BaseBdev4", 00:15:36.170 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:36.170 "is_configured": true, 
00:15:36.170 "data_offset": 0, 00:15:36.170 "data_size": 65536 00:15:36.170 } 00:15:36.170 ] 00:15:36.170 }' 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.170 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.483 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.483 19:05:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.483 [2024-11-26 19:05:02.851973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:37.000 101.00 IOPS, 303.00 MiB/s [2024-11-26T19:05:03.623Z] [2024-11-26 19:05:03.415344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:37.259 [2024-11-26 19:05:03.738816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.259 
19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.259 "name": "raid_bdev1", 00:15:37.259 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:37.259 "strip_size_kb": 0, 00:15:37.259 "state": "online", 00:15:37.259 "raid_level": "raid1", 00:15:37.259 "superblock": false, 00:15:37.259 "num_base_bdevs": 4, 00:15:37.259 "num_base_bdevs_discovered": 3, 00:15:37.259 "num_base_bdevs_operational": 3, 00:15:37.259 "process": { 00:15:37.259 "type": "rebuild", 00:15:37.259 "target": "spare", 00:15:37.259 "progress": { 00:15:37.259 "blocks": 45056, 00:15:37.259 "percent": 68 00:15:37.259 } 00:15:37.259 }, 00:15:37.259 "base_bdevs_list": [ 00:15:37.259 { 00:15:37.259 "name": "spare", 00:15:37.259 "uuid": "34418122-3b16-5dfa-b12b-e92a7bdfaaa7", 00:15:37.259 "is_configured": true, 00:15:37.259 "data_offset": 0, 00:15:37.259 "data_size": 65536 00:15:37.259 }, 00:15:37.259 { 00:15:37.259 "name": null, 00:15:37.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.259 "is_configured": false, 00:15:37.259 "data_offset": 0, 00:15:37.259 "data_size": 65536 00:15:37.259 }, 00:15:37.259 { 00:15:37.259 "name": "BaseBdev3", 00:15:37.259 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:37.259 "is_configured": true, 00:15:37.259 "data_offset": 0, 00:15:37.259 "data_size": 65536 00:15:37.259 }, 00:15:37.259 { 00:15:37.259 "name": "BaseBdev4", 00:15:37.259 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:37.259 "is_configured": true, 00:15:37.259 "data_offset": 0, 00:15:37.259 "data_size": 65536 
00:15:37.259 } 00:15:37.259 ] 00:15:37.259 }' 00:15:37.259 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.517 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.517 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.517 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.517 19:05:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.517 [2024-11-26 19:05:04.108896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:38.711 92.14 IOPS, 276.43 MiB/s [2024-11-26T19:05:05.334Z] 19:05:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.711 19:05:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.711 19:05:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.711 19:05:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.711 19:05:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.711 19:05:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.711 19:05:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.711 19:05:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.711 19:05:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.711 19:05:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.711 [2024-11-26 19:05:04.993187] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:38.711 19:05:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.711 19:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.711 "name": "raid_bdev1", 00:15:38.711 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:38.711 "strip_size_kb": 0, 00:15:38.711 "state": "online", 00:15:38.711 "raid_level": "raid1", 00:15:38.711 "superblock": false, 00:15:38.711 "num_base_bdevs": 4, 00:15:38.711 "num_base_bdevs_discovered": 3, 00:15:38.711 "num_base_bdevs_operational": 3, 00:15:38.711 "process": { 00:15:38.711 "type": "rebuild", 00:15:38.711 "target": "spare", 00:15:38.711 "progress": { 00:15:38.711 "blocks": 63488, 00:15:38.711 "percent": 96 00:15:38.711 } 00:15:38.711 }, 00:15:38.711 "base_bdevs_list": [ 00:15:38.711 { 00:15:38.711 "name": "spare", 00:15:38.711 "uuid": "34418122-3b16-5dfa-b12b-e92a7bdfaaa7", 00:15:38.711 "is_configured": true, 00:15:38.711 "data_offset": 0, 00:15:38.711 "data_size": 65536 00:15:38.711 }, 00:15:38.711 { 00:15:38.711 "name": null, 00:15:38.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.711 "is_configured": false, 00:15:38.711 "data_offset": 0, 00:15:38.711 "data_size": 65536 00:15:38.711 }, 00:15:38.711 { 00:15:38.711 "name": "BaseBdev3", 00:15:38.711 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:38.711 "is_configured": true, 00:15:38.711 "data_offset": 0, 00:15:38.711 "data_size": 65536 00:15:38.711 }, 00:15:38.711 { 00:15:38.711 "name": "BaseBdev4", 00:15:38.711 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:38.711 "is_configured": true, 00:15:38.711 "data_offset": 0, 00:15:38.711 "data_size": 65536 00:15:38.711 } 00:15:38.711 ] 00:15:38.711 }' 00:15:38.711 19:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.711 19:05:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.711 19:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.711 [2024-11-26 19:05:05.093197] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:38.711 [2024-11-26 19:05:05.096524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.711 19:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.711 19:05:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.537 84.00 IOPS, 252.00 MiB/s [2024-11-26T19:05:06.160Z] 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.537 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.537 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.537 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.537 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.537 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.537 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.537 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.537 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.537 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.537 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:39.796 "name": "raid_bdev1", 00:15:39.796 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:39.796 "strip_size_kb": 0, 00:15:39.796 "state": "online", 00:15:39.796 "raid_level": "raid1", 00:15:39.796 "superblock": false, 00:15:39.796 "num_base_bdevs": 4, 00:15:39.796 "num_base_bdevs_discovered": 3, 00:15:39.796 "num_base_bdevs_operational": 3, 00:15:39.796 "base_bdevs_list": [ 00:15:39.796 { 00:15:39.796 "name": "spare", 00:15:39.796 "uuid": "34418122-3b16-5dfa-b12b-e92a7bdfaaa7", 00:15:39.796 "is_configured": true, 00:15:39.796 "data_offset": 0, 00:15:39.796 "data_size": 65536 00:15:39.796 }, 00:15:39.796 { 00:15:39.796 "name": null, 00:15:39.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.796 "is_configured": false, 00:15:39.796 "data_offset": 0, 00:15:39.796 "data_size": 65536 00:15:39.796 }, 00:15:39.796 { 00:15:39.796 "name": "BaseBdev3", 00:15:39.796 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:39.796 "is_configured": true, 00:15:39.796 "data_offset": 0, 00:15:39.796 "data_size": 65536 00:15:39.796 }, 00:15:39.796 { 00:15:39.796 "name": "BaseBdev4", 00:15:39.796 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:39.796 "is_configured": true, 00:15:39.796 "data_offset": 0, 00:15:39.796 "data_size": 65536 00:15:39.796 } 00:15:39.796 ] 00:15:39.796 }' 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 
00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.796 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.796 "name": "raid_bdev1", 00:15:39.797 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:39.797 "strip_size_kb": 0, 00:15:39.797 "state": "online", 00:15:39.797 "raid_level": "raid1", 00:15:39.797 "superblock": false, 00:15:39.797 "num_base_bdevs": 4, 00:15:39.797 "num_base_bdevs_discovered": 3, 00:15:39.797 "num_base_bdevs_operational": 3, 00:15:39.797 "base_bdevs_list": [ 00:15:39.797 { 00:15:39.797 "name": "spare", 00:15:39.797 "uuid": "34418122-3b16-5dfa-b12b-e92a7bdfaaa7", 00:15:39.797 "is_configured": true, 00:15:39.797 "data_offset": 0, 00:15:39.797 "data_size": 65536 00:15:39.797 }, 00:15:39.797 { 00:15:39.797 "name": null, 00:15:39.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.797 "is_configured": false, 00:15:39.797 "data_offset": 0, 00:15:39.797 "data_size": 65536 00:15:39.797 }, 00:15:39.797 { 00:15:39.797 "name": "BaseBdev3", 00:15:39.797 "uuid": 
"255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:39.797 "is_configured": true, 00:15:39.797 "data_offset": 0, 00:15:39.797 "data_size": 65536 00:15:39.797 }, 00:15:39.797 { 00:15:39.797 "name": "BaseBdev4", 00:15:39.797 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:39.797 "is_configured": true, 00:15:39.797 "data_offset": 0, 00:15:39.797 "data_size": 65536 00:15:39.797 } 00:15:39.797 ] 00:15:39.797 }' 00:15:39.797 77.33 IOPS, 232.00 MiB/s [2024-11-26T19:05:06.420Z] 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.797 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.797 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.055 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.055 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.056 "name": "raid_bdev1", 00:15:40.056 "uuid": "4ee0d225-f17d-4bc0-be19-8cefc87267ba", 00:15:40.056 "strip_size_kb": 0, 00:15:40.056 "state": "online", 00:15:40.056 "raid_level": "raid1", 00:15:40.056 "superblock": false, 00:15:40.056 "num_base_bdevs": 4, 00:15:40.056 "num_base_bdevs_discovered": 3, 00:15:40.056 "num_base_bdevs_operational": 3, 00:15:40.056 "base_bdevs_list": [ 00:15:40.056 { 00:15:40.056 "name": "spare", 00:15:40.056 "uuid": "34418122-3b16-5dfa-b12b-e92a7bdfaaa7", 00:15:40.056 "is_configured": true, 00:15:40.056 "data_offset": 0, 00:15:40.056 "data_size": 65536 00:15:40.056 }, 00:15:40.056 { 00:15:40.056 "name": null, 00:15:40.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.056 "is_configured": false, 00:15:40.056 "data_offset": 0, 00:15:40.056 "data_size": 65536 00:15:40.056 }, 00:15:40.056 { 00:15:40.056 "name": "BaseBdev3", 00:15:40.056 "uuid": "255e3ceb-95b1-5ab2-bdc4-0f4371406d6a", 00:15:40.056 "is_configured": true, 00:15:40.056 "data_offset": 0, 00:15:40.056 "data_size": 65536 00:15:40.056 }, 00:15:40.056 { 00:15:40.056 "name": "BaseBdev4", 00:15:40.056 "uuid": "4d653849-de91-54c1-a721-600ece2a6d61", 00:15:40.056 "is_configured": true, 00:15:40.056 "data_offset": 0, 00:15:40.056 "data_size": 65536 00:15:40.056 } 00:15:40.056 ] 00:15:40.056 }' 00:15:40.056 19:05:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.056 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.315 19:05:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:40.315 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.315 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.315 [2024-11-26 19:05:06.923208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.315 [2024-11-26 19:05:06.923297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.574 00:15:40.574 Latency(us) 00:15:40.574 [2024-11-26T19:05:07.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.574 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:40.574 raid_bdev1 : 9.65 74.10 222.31 0.00 0.00 20335.33 288.58 123922.62 00:15:40.574 [2024-11-26T19:05:07.197Z] =================================================================================================================== 00:15:40.574 [2024-11-26T19:05:07.197Z] Total : 74.10 222.31 0.00 0.00 20335.33 288.58 123922.62 00:15:40.574 [2024-11-26 19:05:06.995228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.574 [2024-11-26 19:05:06.995376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.574 [2024-11-26 19:05:06.995523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.574 [2024-11-26 19:05:06.995554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:40.574 { 00:15:40.574 "results": [ 00:15:40.574 { 00:15:40.574 "job": "raid_bdev1", 00:15:40.574 "core_mask": "0x1", 
00:15:40.574 "workload": "randrw", 00:15:40.574 "percentage": 50, 00:15:40.574 "status": "finished", 00:15:40.574 "queue_depth": 2, 00:15:40.574 "io_size": 3145728, 00:15:40.574 "runtime": 9.648899, 00:15:40.574 "iops": 74.10171875568393, 00:15:40.574 "mibps": 222.3051562670518, 00:15:40.574 "io_failed": 0, 00:15:40.574 "io_timeout": 0, 00:15:40.574 "avg_latency_us": 20335.334123331213, 00:15:40.574 "min_latency_us": 288.58181818181816, 00:15:40.574 "max_latency_us": 123922.61818181818 00:15:40.574 } 00:15:40.574 ], 00:15:40.574 "core_count": 1 00:15:40.574 } 00:15:40.574 19:05:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.574 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:40.832 /dev/nbd0 00:15:40.832 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.832 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.832 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:40.832 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:40.832 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.832 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.832 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:40.832 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:40.832 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.832 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.832 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.832 1+0 records in 00:15:40.832 1+0 records out 00:15:40.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241668 s, 16.9 MB/s 
00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:40.833 
19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.833 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:41.096 /dev/nbd1 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.096 1+0 records in 00:15:41.096 1+0 records out 00:15:41.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279933 s, 14.6 MB/s 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:41.096 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:41.360 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:41.360 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.360 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:41.360 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.360 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:41.360 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.360 19:05:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.618 
19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:41.618 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:41.877 /dev/nbd1 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:41.877 19:05:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.877 1+0 records in 00:15:41.877 1+0 records out 00:15:41.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339329 s, 12.1 MB/s 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:41.877 19:05:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:42.137 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:42.137 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.137 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:42.137 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:42.137 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:42.137 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.137 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.395 19:05:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79501 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79501 ']' 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79501 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79501 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79501' 00:15:42.658 killing process with pid 79501 00:15:42.658 19:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79501 00:15:42.658 Received shutdown signal, test time was about 11.833896 seconds 00:15:42.658 00:15:42.658 Latency(us) 00:15:42.658 [2024-11-26T19:05:09.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.659 [2024-11-26T19:05:09.282Z] =================================================================================================================== 00:15:42.659 [2024-11-26T19:05:09.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:42.659 19:05:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79501 00:15:42.659 [2024-11-26 19:05:09.160049] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.228 [2024-11-26 19:05:09.541322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:44.164 00:15:44.164 real 0m15.651s 00:15:44.164 user 0m20.091s 00:15:44.164 sys 0m1.903s 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.164 ************************************ 00:15:44.164 END TEST raid_rebuild_test_io 00:15:44.164 ************************************ 00:15:44.164 19:05:10 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test 
raid1 4 true true true 00:15:44.164 19:05:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:44.164 19:05:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.164 19:05:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.164 ************************************ 00:15:44.164 START TEST raid_rebuild_test_sb_io 00:15:44.164 ************************************ 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:44.164 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:44.165 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79959 00:15:44.165 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:44.165 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79959 00:15:44.165 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79959 ']' 00:15:44.165 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.165 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.165 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.165 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.165 19:05:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.423 [2024-11-26 19:05:10.857058] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:15:44.423 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:44.423 Zero copy mechanism will not be used. 
00:15:44.423 [2024-11-26 19:05:10.857221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79959 ] 00:15:44.423 [2024-11-26 19:05:11.039863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.681 [2024-11-26 19:05:11.168281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.939 [2024-11-26 19:05:11.371016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.939 [2024-11-26 19:05:11.371093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.197 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.197 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:45.197 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.197 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:45.197 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.197 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.456 BaseBdev1_malloc 00:15:45.456 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.457 [2024-11-26 19:05:11.863731] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:45.457 [2024-11-26 19:05:11.863814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.457 [2024-11-26 19:05:11.863848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:45.457 [2024-11-26 19:05:11.863868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.457 [2024-11-26 19:05:11.866796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.457 [2024-11-26 19:05:11.866849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:45.457 BaseBdev1 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.457 BaseBdev2_malloc 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.457 [2024-11-26 19:05:11.924146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:45.457 [2024-11-26 19:05:11.924232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:45.457 [2024-11-26 19:05:11.924269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:45.457 [2024-11-26 19:05:11.924310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.457 [2024-11-26 19:05:11.927266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.457 [2024-11-26 19:05:11.927318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:45.457 BaseBdev2 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.457 BaseBdev3_malloc 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.457 19:05:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.457 [2024-11-26 19:05:12.000797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:45.457 [2024-11-26 19:05:12.000889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.457 [2024-11-26 19:05:12.000926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:45.457 
[2024-11-26 19:05:12.000945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.457 [2024-11-26 19:05:12.003892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.457 [2024-11-26 19:05:12.003940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:45.457 BaseBdev3 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.457 BaseBdev4_malloc 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.457 [2024-11-26 19:05:12.061264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:45.457 [2024-11-26 19:05:12.061363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.457 [2024-11-26 19:05:12.061399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:45.457 [2024-11-26 19:05:12.061417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.457 [2024-11-26 19:05:12.064378] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.457 [2024-11-26 19:05:12.064427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:45.457 BaseBdev4 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.457 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.715 spare_malloc 00:15:45.715 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.715 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:45.715 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.715 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.715 spare_delay 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.716 [2024-11-26 19:05:12.133470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:45.716 [2024-11-26 19:05:12.133548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.716 [2024-11-26 19:05:12.133578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:15:45.716 [2024-11-26 19:05:12.133596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.716 [2024-11-26 19:05:12.136521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.716 [2024-11-26 19:05:12.136570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:45.716 spare 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.716 [2024-11-26 19:05:12.145581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.716 [2024-11-26 19:05:12.148101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.716 [2024-11-26 19:05:12.148196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:45.716 [2024-11-26 19:05:12.148294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:45.716 [2024-11-26 19:05:12.148555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:45.716 [2024-11-26 19:05:12.148590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:45.716 [2024-11-26 19:05:12.148951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:45.716 [2024-11-26 19:05:12.149209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:45.716 [2024-11-26 19:05:12.149233] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:45.716 [2024-11-26 19:05:12.149500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.716 "name": "raid_bdev1", 00:15:45.716 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:45.716 "strip_size_kb": 0, 00:15:45.716 "state": "online", 00:15:45.716 "raid_level": "raid1", 00:15:45.716 "superblock": true, 00:15:45.716 "num_base_bdevs": 4, 00:15:45.716 "num_base_bdevs_discovered": 4, 00:15:45.716 "num_base_bdevs_operational": 4, 00:15:45.716 "base_bdevs_list": [ 00:15:45.716 { 00:15:45.716 "name": "BaseBdev1", 00:15:45.716 "uuid": "fef498a6-5dc5-5aa2-bef4-f1e063582bb6", 00:15:45.716 "is_configured": true, 00:15:45.716 "data_offset": 2048, 00:15:45.716 "data_size": 63488 00:15:45.716 }, 00:15:45.716 { 00:15:45.716 "name": "BaseBdev2", 00:15:45.716 "uuid": "b24698c3-f3c9-5a4c-bc27-9bca5c7061a6", 00:15:45.716 "is_configured": true, 00:15:45.716 "data_offset": 2048, 00:15:45.716 "data_size": 63488 00:15:45.716 }, 00:15:45.716 { 00:15:45.716 "name": "BaseBdev3", 00:15:45.716 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:45.716 "is_configured": true, 00:15:45.716 "data_offset": 2048, 00:15:45.716 "data_size": 63488 00:15:45.716 }, 00:15:45.716 { 00:15:45.716 "name": "BaseBdev4", 00:15:45.716 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:45.716 "is_configured": true, 00:15:45.716 "data_offset": 2048, 00:15:45.716 "data_size": 63488 00:15:45.716 } 00:15:45.716 ] 00:15:45.716 }' 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.716 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.283 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:46.283 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.283 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.283 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.283 [2024-11-26 19:05:12.662155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.283 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.283 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:46.283 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.283 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.283 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:46.283 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.284 [2024-11-26 19:05:12.753694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.284 19:05:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.284 "name": "raid_bdev1", 00:15:46.284 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:46.284 "strip_size_kb": 0, 00:15:46.284 "state": "online", 00:15:46.284 "raid_level": "raid1", 00:15:46.284 
"superblock": true, 00:15:46.284 "num_base_bdevs": 4, 00:15:46.284 "num_base_bdevs_discovered": 3, 00:15:46.284 "num_base_bdevs_operational": 3, 00:15:46.284 "base_bdevs_list": [ 00:15:46.284 { 00:15:46.284 "name": null, 00:15:46.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.284 "is_configured": false, 00:15:46.284 "data_offset": 0, 00:15:46.284 "data_size": 63488 00:15:46.284 }, 00:15:46.284 { 00:15:46.284 "name": "BaseBdev2", 00:15:46.284 "uuid": "b24698c3-f3c9-5a4c-bc27-9bca5c7061a6", 00:15:46.284 "is_configured": true, 00:15:46.284 "data_offset": 2048, 00:15:46.284 "data_size": 63488 00:15:46.284 }, 00:15:46.284 { 00:15:46.284 "name": "BaseBdev3", 00:15:46.284 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:46.284 "is_configured": true, 00:15:46.284 "data_offset": 2048, 00:15:46.284 "data_size": 63488 00:15:46.284 }, 00:15:46.284 { 00:15:46.284 "name": "BaseBdev4", 00:15:46.284 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:46.284 "is_configured": true, 00:15:46.284 "data_offset": 2048, 00:15:46.284 "data_size": 63488 00:15:46.284 } 00:15:46.284 ] 00:15:46.284 }' 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.284 19:05:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.284 [2024-11-26 19:05:12.866826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:46.284 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:46.284 Zero copy mechanism will not be used. 00:15:46.284 Running I/O for 60 seconds... 
00:15:46.850 19:05:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:46.851 19:05:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.851 19:05:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.851 [2024-11-26 19:05:13.304562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.851 19:05:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.851 19:05:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:46.851 [2024-11-26 19:05:13.399871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:46.851 [2024-11-26 19:05:13.402691] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.109 [2024-11-26 19:05:13.532594] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:47.109 [2024-11-26 19:05:13.534970] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:47.366 [2024-11-26 19:05:13.773855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:47.366 [2024-11-26 19:05:13.775010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:47.931 99.00 IOPS, 297.00 MiB/s [2024-11-26T19:05:14.554Z] [2024-11-26 19:05:14.323742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:47.931 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.931 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:47.931 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.931 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.931 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.931 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.931 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.931 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.931 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.931 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.931 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.931 "name": "raid_bdev1", 00:15:47.931 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:47.931 "strip_size_kb": 0, 00:15:47.931 "state": "online", 00:15:47.931 "raid_level": "raid1", 00:15:47.931 "superblock": true, 00:15:47.931 "num_base_bdevs": 4, 00:15:47.931 "num_base_bdevs_discovered": 4, 00:15:47.931 "num_base_bdevs_operational": 4, 00:15:47.931 "process": { 00:15:47.931 "type": "rebuild", 00:15:47.931 "target": "spare", 00:15:47.931 "progress": { 00:15:47.931 "blocks": 10240, 00:15:47.931 "percent": 16 00:15:47.931 } 00:15:47.931 }, 00:15:47.931 "base_bdevs_list": [ 00:15:47.931 { 00:15:47.931 "name": "spare", 00:15:47.931 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:47.931 "is_configured": true, 00:15:47.931 "data_offset": 2048, 00:15:47.931 "data_size": 63488 00:15:47.931 }, 00:15:47.931 { 00:15:47.931 "name": "BaseBdev2", 00:15:47.932 "uuid": "b24698c3-f3c9-5a4c-bc27-9bca5c7061a6", 00:15:47.932 "is_configured": true, 
00:15:47.932 "data_offset": 2048, 00:15:47.932 "data_size": 63488 00:15:47.932 }, 00:15:47.932 { 00:15:47.932 "name": "BaseBdev3", 00:15:47.932 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:47.932 "is_configured": true, 00:15:47.932 "data_offset": 2048, 00:15:47.932 "data_size": 63488 00:15:47.932 }, 00:15:47.932 { 00:15:47.932 "name": "BaseBdev4", 00:15:47.932 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:47.932 "is_configured": true, 00:15:47.932 "data_offset": 2048, 00:15:47.932 "data_size": 63488 00:15:47.932 } 00:15:47.932 ] 00:15:47.932 }' 00:15:47.932 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.932 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.932 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.932 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.932 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:47.932 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.932 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.932 [2024-11-26 19:05:14.526060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.190 [2024-11-26 19:05:14.625952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:48.190 [2024-11-26 19:05:14.729768] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:48.190 [2024-11-26 19:05:14.735348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.190 [2024-11-26 19:05:14.735410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:15:48.190 [2024-11-26 19:05:14.735430] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:48.190 [2024-11-26 19:05:14.772227] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.190 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:48.448 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.448 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.448 "name": "raid_bdev1", 00:15:48.448 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:48.448 "strip_size_kb": 0, 00:15:48.448 "state": "online", 00:15:48.448 "raid_level": "raid1", 00:15:48.448 "superblock": true, 00:15:48.448 "num_base_bdevs": 4, 00:15:48.448 "num_base_bdevs_discovered": 3, 00:15:48.448 "num_base_bdevs_operational": 3, 00:15:48.448 "base_bdevs_list": [ 00:15:48.448 { 00:15:48.448 "name": null, 00:15:48.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.448 "is_configured": false, 00:15:48.448 "data_offset": 0, 00:15:48.448 "data_size": 63488 00:15:48.448 }, 00:15:48.448 { 00:15:48.448 "name": "BaseBdev2", 00:15:48.448 "uuid": "b24698c3-f3c9-5a4c-bc27-9bca5c7061a6", 00:15:48.448 "is_configured": true, 00:15:48.448 "data_offset": 2048, 00:15:48.448 "data_size": 63488 00:15:48.448 }, 00:15:48.448 { 00:15:48.448 "name": "BaseBdev3", 00:15:48.448 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:48.448 "is_configured": true, 00:15:48.448 "data_offset": 2048, 00:15:48.448 "data_size": 63488 00:15:48.448 }, 00:15:48.448 { 00:15:48.448 "name": "BaseBdev4", 00:15:48.448 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:48.448 "is_configured": true, 00:15:48.448 "data_offset": 2048, 00:15:48.448 "data_size": 63488 00:15:48.448 } 00:15:48.448 ] 00:15:48.448 }' 00:15:48.448 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.448 19:05:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.706 97.00 IOPS, 291.00 MiB/s [2024-11-26T19:05:15.329Z] 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.706 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.706 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.706 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.706 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.706 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.706 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.706 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.706 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.706 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.706 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.706 "name": "raid_bdev1", 00:15:48.706 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:48.706 "strip_size_kb": 0, 00:15:48.706 "state": "online", 00:15:48.706 "raid_level": "raid1", 00:15:48.706 "superblock": true, 00:15:48.706 "num_base_bdevs": 4, 00:15:48.706 "num_base_bdevs_discovered": 3, 00:15:48.706 "num_base_bdevs_operational": 3, 00:15:48.706 "base_bdevs_list": [ 00:15:48.706 { 00:15:48.706 "name": null, 00:15:48.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.706 "is_configured": false, 00:15:48.706 "data_offset": 0, 00:15:48.706 "data_size": 63488 00:15:48.706 }, 00:15:48.706 { 00:15:48.706 "name": "BaseBdev2", 00:15:48.706 "uuid": "b24698c3-f3c9-5a4c-bc27-9bca5c7061a6", 00:15:48.706 "is_configured": true, 00:15:48.706 "data_offset": 2048, 00:15:48.706 "data_size": 63488 00:15:48.706 }, 00:15:48.706 { 00:15:48.706 "name": "BaseBdev3", 00:15:48.706 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 
00:15:48.706 "is_configured": true, 00:15:48.706 "data_offset": 2048, 00:15:48.706 "data_size": 63488 00:15:48.706 }, 00:15:48.706 { 00:15:48.706 "name": "BaseBdev4", 00:15:48.706 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:48.706 "is_configured": true, 00:15:48.706 "data_offset": 2048, 00:15:48.706 "data_size": 63488 00:15:48.706 } 00:15:48.706 ] 00:15:48.706 }' 00:15:48.706 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.964 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.964 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.965 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.965 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:48.965 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.965 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.965 [2024-11-26 19:05:15.428212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.965 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.965 19:05:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:48.965 [2024-11-26 19:05:15.490596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:48.965 [2024-11-26 19:05:15.493349] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:49.222 [2024-11-26 19:05:15.628848] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:49.481 [2024-11-26 19:05:15.878236] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:49.481 [2024-11-26 19:05:15.879375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:49.739 114.00 IOPS, 342.00 MiB/s [2024-11-26T19:05:16.362Z] [2024-11-26 19:05:16.278634] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:49.997 [2024-11-26 19:05:16.424346] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:49.997 [2024-11-26 19:05:16.424893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.997 19:05:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.997 "name": "raid_bdev1", 00:15:49.997 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:49.997 "strip_size_kb": 0, 00:15:49.997 "state": "online", 00:15:49.997 "raid_level": "raid1", 00:15:49.997 "superblock": true, 00:15:49.997 "num_base_bdevs": 4, 00:15:49.997 "num_base_bdevs_discovered": 4, 00:15:49.997 "num_base_bdevs_operational": 4, 00:15:49.997 "process": { 00:15:49.997 "type": "rebuild", 00:15:49.997 "target": "spare", 00:15:49.997 "progress": { 00:15:49.997 "blocks": 10240, 00:15:49.997 "percent": 16 00:15:49.997 } 00:15:49.997 }, 00:15:49.997 "base_bdevs_list": [ 00:15:49.997 { 00:15:49.997 "name": "spare", 00:15:49.997 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:49.997 "is_configured": true, 00:15:49.997 "data_offset": 2048, 00:15:49.997 "data_size": 63488 00:15:49.997 }, 00:15:49.997 { 00:15:49.997 "name": "BaseBdev2", 00:15:49.997 "uuid": "b24698c3-f3c9-5a4c-bc27-9bca5c7061a6", 00:15:49.997 "is_configured": true, 00:15:49.997 "data_offset": 2048, 00:15:49.997 "data_size": 63488 00:15:49.997 }, 00:15:49.997 { 00:15:49.997 "name": "BaseBdev3", 00:15:49.997 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:49.997 "is_configured": true, 00:15:49.997 "data_offset": 2048, 00:15:49.997 "data_size": 63488 00:15:49.997 }, 00:15:49.997 { 00:15:49.997 "name": "BaseBdev4", 00:15:49.997 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:49.997 "is_configured": true, 00:15:49.997 "data_offset": 2048, 00:15:49.997 "data_size": 63488 00:15:49.997 } 00:15:49.997 ] 00:15:49.997 }' 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.997 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.256 19:05:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:50.257 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.257 [2024-11-26 19:05:16.641486] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:50.257 [2024-11-26 19:05:16.858017] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:50.257 [2024-11-26 19:05:16.858090] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.257 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.515 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.515 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.515 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.515 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.515 102.75 IOPS, 308.25 MiB/s [2024-11-26T19:05:17.138Z] 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.515 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.515 "name": "raid_bdev1", 00:15:50.515 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:50.515 "strip_size_kb": 0, 00:15:50.515 "state": "online", 00:15:50.515 "raid_level": "raid1", 00:15:50.515 "superblock": true, 00:15:50.515 "num_base_bdevs": 4, 00:15:50.515 "num_base_bdevs_discovered": 3, 00:15:50.515 "num_base_bdevs_operational": 3, 00:15:50.515 "process": { 00:15:50.515 "type": "rebuild", 00:15:50.515 "target": "spare", 00:15:50.515 "progress": { 00:15:50.515 "blocks": 12288, 00:15:50.515 "percent": 19 00:15:50.515 } 00:15:50.515 }, 00:15:50.515 "base_bdevs_list": [ 00:15:50.515 { 00:15:50.515 "name": "spare", 00:15:50.515 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:50.515 "is_configured": true, 00:15:50.515 "data_offset": 2048, 00:15:50.515 "data_size": 63488 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "name": null, 00:15:50.515 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:50.515 "is_configured": false, 00:15:50.515 "data_offset": 0, 00:15:50.515 "data_size": 63488 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "name": "BaseBdev3", 00:15:50.515 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:50.515 "is_configured": true, 00:15:50.515 "data_offset": 2048, 00:15:50.515 "data_size": 63488 00:15:50.515 }, 00:15:50.515 { 00:15:50.515 "name": "BaseBdev4", 00:15:50.515 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:50.515 "is_configured": true, 00:15:50.515 "data_offset": 2048, 00:15:50.515 "data_size": 63488 00:15:50.515 } 00:15:50.515 ] 00:15:50.515 }' 00:15:50.515 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.515 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.515 19:05:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.515 [2024-11-26 19:05:17.006344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:50.515 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.515 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=555 00:15:50.515 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.515 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.515 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.515 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.515 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.515 19:05:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.516 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.516 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.516 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.516 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.516 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.516 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.516 "name": "raid_bdev1", 00:15:50.516 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:50.516 "strip_size_kb": 0, 00:15:50.516 "state": "online", 00:15:50.516 "raid_level": "raid1", 00:15:50.516 "superblock": true, 00:15:50.516 "num_base_bdevs": 4, 00:15:50.516 "num_base_bdevs_discovered": 3, 00:15:50.516 "num_base_bdevs_operational": 3, 00:15:50.516 "process": { 00:15:50.516 "type": "rebuild", 00:15:50.516 "target": "spare", 00:15:50.516 "progress": { 00:15:50.516 "blocks": 14336, 00:15:50.516 "percent": 22 00:15:50.516 } 00:15:50.516 }, 00:15:50.516 "base_bdevs_list": [ 00:15:50.516 { 00:15:50.516 "name": "spare", 00:15:50.516 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:50.516 "is_configured": true, 00:15:50.516 "data_offset": 2048, 00:15:50.516 "data_size": 63488 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "name": null, 00:15:50.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.516 "is_configured": false, 00:15:50.516 "data_offset": 0, 00:15:50.516 "data_size": 63488 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "name": "BaseBdev3", 00:15:50.516 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:50.516 "is_configured": true, 00:15:50.516 "data_offset": 2048, 00:15:50.516 "data_size": 
63488 00:15:50.516 }, 00:15:50.516 { 00:15:50.516 "name": "BaseBdev4", 00:15:50.516 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:50.516 "is_configured": true, 00:15:50.516 "data_offset": 2048, 00:15:50.516 "data_size": 63488 00:15:50.516 } 00:15:50.516 ] 00:15:50.516 }' 00:15:50.516 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.516 [2024-11-26 19:05:17.134414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:50.516 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.773 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.773 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.773 19:05:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:51.031 [2024-11-26 19:05:17.527456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:51.290 [2024-11-26 19:05:17.866497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:51.857 98.60 IOPS, 295.80 MiB/s [2024-11-26T19:05:18.480Z] 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:51.857 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.857 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.857 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.857 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.858 19:05:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.858 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.858 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.858 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.858 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.858 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.858 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.858 "name": "raid_bdev1", 00:15:51.858 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:51.858 "strip_size_kb": 0, 00:15:51.858 "state": "online", 00:15:51.858 "raid_level": "raid1", 00:15:51.858 "superblock": true, 00:15:51.858 "num_base_bdevs": 4, 00:15:51.858 "num_base_bdevs_discovered": 3, 00:15:51.858 "num_base_bdevs_operational": 3, 00:15:51.858 "process": { 00:15:51.858 "type": "rebuild", 00:15:51.858 "target": "spare", 00:15:51.858 "progress": { 00:15:51.858 "blocks": 30720, 00:15:51.858 "percent": 48 00:15:51.858 } 00:15:51.858 }, 00:15:51.858 "base_bdevs_list": [ 00:15:51.858 { 00:15:51.858 "name": "spare", 00:15:51.858 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:51.858 "is_configured": true, 00:15:51.858 "data_offset": 2048, 00:15:51.858 "data_size": 63488 00:15:51.858 }, 00:15:51.858 { 00:15:51.858 "name": null, 00:15:51.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.858 "is_configured": false, 00:15:51.858 "data_offset": 0, 00:15:51.858 "data_size": 63488 00:15:51.858 }, 00:15:51.858 { 00:15:51.858 "name": "BaseBdev3", 00:15:51.858 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:51.858 "is_configured": true, 00:15:51.858 "data_offset": 2048, 00:15:51.858 "data_size": 
63488 00:15:51.858 }, 00:15:51.858 { 00:15:51.858 "name": "BaseBdev4", 00:15:51.858 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:51.858 "is_configured": true, 00:15:51.858 "data_offset": 2048, 00:15:51.858 "data_size": 63488 00:15:51.858 } 00:15:51.858 ] 00:15:51.858 }' 00:15:51.858 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.858 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.858 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.858 [2024-11-26 19:05:18.324714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:51.858 [2024-11-26 19:05:18.325610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:51.858 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.858 19:05:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:52.424 [2024-11-26 19:05:18.755391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:52.424 88.83 IOPS, 266.50 MiB/s [2024-11-26T19:05:19.047Z] [2024-11-26 19:05:18.993197] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:52.999 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:52.999 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.999 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.999 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:15:52.999 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.000 "name": "raid_bdev1", 00:15:53.000 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:53.000 "strip_size_kb": 0, 00:15:53.000 "state": "online", 00:15:53.000 "raid_level": "raid1", 00:15:53.000 "superblock": true, 00:15:53.000 "num_base_bdevs": 4, 00:15:53.000 "num_base_bdevs_discovered": 3, 00:15:53.000 "num_base_bdevs_operational": 3, 00:15:53.000 "process": { 00:15:53.000 "type": "rebuild", 00:15:53.000 "target": "spare", 00:15:53.000 "progress": { 00:15:53.000 "blocks": 51200, 00:15:53.000 "percent": 80 00:15:53.000 } 00:15:53.000 }, 00:15:53.000 "base_bdevs_list": [ 00:15:53.000 { 00:15:53.000 "name": "spare", 00:15:53.000 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:53.000 "is_configured": true, 00:15:53.000 "data_offset": 2048, 00:15:53.000 "data_size": 63488 00:15:53.000 }, 00:15:53.000 { 00:15:53.000 "name": null, 00:15:53.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.000 "is_configured": false, 00:15:53.000 "data_offset": 0, 00:15:53.000 "data_size": 63488 00:15:53.000 }, 00:15:53.000 { 00:15:53.000 "name": "BaseBdev3", 
00:15:53.000 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:53.000 "is_configured": true, 00:15:53.000 "data_offset": 2048, 00:15:53.000 "data_size": 63488 00:15:53.000 }, 00:15:53.000 { 00:15:53.000 "name": "BaseBdev4", 00:15:53.000 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:53.000 "is_configured": true, 00:15:53.000 "data_offset": 2048, 00:15:53.000 "data_size": 63488 00:15:53.000 } 00:15:53.000 ] 00:15:53.000 }' 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.000 19:05:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:53.000 [2024-11-26 19:05:19.612461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:53.259 [2024-11-26 19:05:19.818227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:53.518 81.29 IOPS, 243.86 MiB/s [2024-11-26T19:05:20.141Z] [2024-11-26 19:05:20.053965] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:53.776 [2024-11-26 19:05:20.153946] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:53.776 [2024-11-26 19:05:20.158648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.035 "name": "raid_bdev1", 00:15:54.035 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:54.035 "strip_size_kb": 0, 00:15:54.035 "state": "online", 00:15:54.035 "raid_level": "raid1", 00:15:54.035 "superblock": true, 00:15:54.035 "num_base_bdevs": 4, 00:15:54.035 "num_base_bdevs_discovered": 3, 00:15:54.035 "num_base_bdevs_operational": 3, 00:15:54.035 "base_bdevs_list": [ 00:15:54.035 { 00:15:54.035 "name": "spare", 00:15:54.035 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:54.035 "is_configured": true, 00:15:54.035 "data_offset": 2048, 00:15:54.035 "data_size": 63488 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "name": null, 00:15:54.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.035 "is_configured": false, 00:15:54.035 "data_offset": 0, 00:15:54.035 "data_size": 63488 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "name": 
"BaseBdev3", 00:15:54.035 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:54.035 "is_configured": true, 00:15:54.035 "data_offset": 2048, 00:15:54.035 "data_size": 63488 00:15:54.035 }, 00:15:54.035 { 00:15:54.035 "name": "BaseBdev4", 00:15:54.035 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:54.035 "is_configured": true, 00:15:54.035 "data_offset": 2048, 00:15:54.035 "data_size": 63488 00:15:54.035 } 00:15:54.035 ] 00:15:54.035 }' 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:54.035 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.294 "name": "raid_bdev1", 00:15:54.294 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:54.294 "strip_size_kb": 0, 00:15:54.294 "state": "online", 00:15:54.294 "raid_level": "raid1", 00:15:54.294 "superblock": true, 00:15:54.294 "num_base_bdevs": 4, 00:15:54.294 "num_base_bdevs_discovered": 3, 00:15:54.294 "num_base_bdevs_operational": 3, 00:15:54.294 "base_bdevs_list": [ 00:15:54.294 { 00:15:54.294 "name": "spare", 00:15:54.294 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:54.294 "is_configured": true, 00:15:54.294 "data_offset": 2048, 00:15:54.294 "data_size": 63488 00:15:54.294 }, 00:15:54.294 { 00:15:54.294 "name": null, 00:15:54.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.294 "is_configured": false, 00:15:54.294 "data_offset": 0, 00:15:54.294 "data_size": 63488 00:15:54.294 }, 00:15:54.294 { 00:15:54.294 "name": "BaseBdev3", 00:15:54.294 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:54.294 "is_configured": true, 00:15:54.294 "data_offset": 2048, 00:15:54.294 "data_size": 63488 00:15:54.294 }, 00:15:54.294 { 00:15:54.294 "name": "BaseBdev4", 00:15:54.294 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:54.294 "is_configured": true, 00:15:54.294 "data_offset": 2048, 00:15:54.294 "data_size": 63488 00:15:54.294 } 00:15:54.294 ] 00:15:54.294 }' 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
[[ none == \n\o\n\e ]] 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.294 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.295 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.553 75.00 IOPS, 225.00 MiB/s [2024-11-26T19:05:21.176Z] 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.553 "name": "raid_bdev1", 00:15:54.553 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:54.553 
"strip_size_kb": 0, 00:15:54.553 "state": "online", 00:15:54.553 "raid_level": "raid1", 00:15:54.553 "superblock": true, 00:15:54.553 "num_base_bdevs": 4, 00:15:54.553 "num_base_bdevs_discovered": 3, 00:15:54.553 "num_base_bdevs_operational": 3, 00:15:54.553 "base_bdevs_list": [ 00:15:54.553 { 00:15:54.553 "name": "spare", 00:15:54.553 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:54.553 "is_configured": true, 00:15:54.553 "data_offset": 2048, 00:15:54.553 "data_size": 63488 00:15:54.553 }, 00:15:54.553 { 00:15:54.553 "name": null, 00:15:54.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.553 "is_configured": false, 00:15:54.553 "data_offset": 0, 00:15:54.553 "data_size": 63488 00:15:54.553 }, 00:15:54.553 { 00:15:54.553 "name": "BaseBdev3", 00:15:54.553 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:54.553 "is_configured": true, 00:15:54.553 "data_offset": 2048, 00:15:54.553 "data_size": 63488 00:15:54.553 }, 00:15:54.553 { 00:15:54.553 "name": "BaseBdev4", 00:15:54.553 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:54.553 "is_configured": true, 00:15:54.553 "data_offset": 2048, 00:15:54.553 "data_size": 63488 00:15:54.553 } 00:15:54.553 ] 00:15:54.553 }' 00:15:54.553 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.553 19:05:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.812 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:54.812 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.812 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.812 [2024-11-26 19:05:21.380803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.812 [2024-11-26 19:05:21.380873] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:15:55.070 00:15:55.070 Latency(us) 00:15:55.070 [2024-11-26T19:05:21.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.070 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:55.070 raid_bdev1 : 8.62 71.56 214.67 0.00 0.00 19161.91 281.13 117726.49 00:15:55.070 [2024-11-26T19:05:21.693Z] =================================================================================================================== 00:15:55.070 [2024-11-26T19:05:21.693Z] Total : 71.56 214.67 0.00 0.00 19161.91 281.13 117726.49 00:15:55.071 [2024-11-26 19:05:21.513162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.071 [2024-11-26 19:05:21.513336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.071 { 00:15:55.071 "results": [ 00:15:55.071 { 00:15:55.071 "job": "raid_bdev1", 00:15:55.071 "core_mask": "0x1", 00:15:55.071 "workload": "randrw", 00:15:55.071 "percentage": 50, 00:15:55.071 "status": "finished", 00:15:55.071 "queue_depth": 2, 00:15:55.071 "io_size": 3145728, 00:15:55.071 "runtime": 8.62256, 00:15:55.071 "iops": 71.55647510716075, 00:15:55.071 "mibps": 214.66942532148227, 00:15:55.071 "io_failed": 0, 00:15:55.071 "io_timeout": 0, 00:15:55.071 "avg_latency_us": 19161.906656843967, 00:15:55.071 "min_latency_us": 281.13454545454545, 00:15:55.071 "max_latency_us": 117726.48727272727 00:15:55.071 } 00:15:55.071 ], 00:15:55.071 "core_count": 1 00:15:55.071 } 00:15:55.071 [2024-11-26 19:05:21.513496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.071 [2024-11-26 19:05:21.513518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.071 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:55.329 /dev/nbd0 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.588 1+0 records in 00:15:55.588 1+0 records out 00:15:55.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535917 s, 7.6 MB/s 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:55.588 19:05:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.588 19:05:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:55.846 /dev/nbd1 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.846 1+0 records in 00:15:55.846 1+0 records out 00:15:55.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414432 s, 9.9 MB/s 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:55.846 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.847 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.847 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:55.847 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.847 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.847 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:56.105 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:56.105 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.105 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:56.106 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.106 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:56.106 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.106 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:56.364 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:56.364 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:56.364 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.365 19:05:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:56.623 /dev/nbd1 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.623 1+0 records in 00:15:56.623 1+0 records out 00:15:56.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034561 s, 11.9 MB/s 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.623 19:05:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.623 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:57.189 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:57.190 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:57.190 19:05:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.190 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.448 
[2024-11-26 19:05:23.933803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:57.448 [2024-11-26 19:05:23.933886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.448 [2024-11-26 19:05:23.933920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:57.448 [2024-11-26 19:05:23.933939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.448 [2024-11-26 19:05:23.937119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.448 [2024-11-26 19:05:23.937180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:57.448 [2024-11-26 19:05:23.937338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:57.448 [2024-11-26 19:05:23.937425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.448 [2024-11-26 19:05:23.937615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.448 [2024-11-26 19:05:23.937769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:57.448 spare 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:57.448 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.449 19:05:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.449 [2024-11-26 19:05:24.037959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:57.449 [2024-11-26 19:05:24.038045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:57.449 [2024-11-26 19:05:24.038571] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:57.449 [2024-11-26 19:05:24.038901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:57.449 [2024-11-26 19:05:24.038919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:57.449 [2024-11-26 19:05:24.039187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.449 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.707 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.707 "name": "raid_bdev1", 00:15:57.707 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:57.707 "strip_size_kb": 0, 00:15:57.707 "state": "online", 00:15:57.707 "raid_level": "raid1", 00:15:57.707 "superblock": true, 00:15:57.707 "num_base_bdevs": 4, 00:15:57.707 "num_base_bdevs_discovered": 3, 00:15:57.707 "num_base_bdevs_operational": 3, 00:15:57.707 "base_bdevs_list": [ 00:15:57.707 { 00:15:57.707 "name": "spare", 00:15:57.707 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:57.707 "is_configured": true, 00:15:57.707 "data_offset": 2048, 00:15:57.707 "data_size": 63488 00:15:57.707 }, 00:15:57.707 { 00:15:57.707 "name": null, 00:15:57.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.707 "is_configured": false, 00:15:57.707 "data_offset": 2048, 00:15:57.707 "data_size": 63488 00:15:57.707 }, 00:15:57.707 { 00:15:57.707 "name": "BaseBdev3", 00:15:57.707 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:57.707 "is_configured": true, 00:15:57.707 "data_offset": 2048, 00:15:57.707 "data_size": 63488 00:15:57.707 }, 00:15:57.707 { 00:15:57.707 "name": "BaseBdev4", 00:15:57.707 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:57.707 "is_configured": true, 00:15:57.707 "data_offset": 2048, 00:15:57.707 "data_size": 63488 00:15:57.707 } 00:15:57.707 ] 00:15:57.707 }' 00:15:57.707 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.707 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.966 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:15:57.966 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.966 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.966 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.966 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.966 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.966 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.966 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.966 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.966 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.224 "name": "raid_bdev1", 00:15:58.224 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:58.224 "strip_size_kb": 0, 00:15:58.224 "state": "online", 00:15:58.224 "raid_level": "raid1", 00:15:58.224 "superblock": true, 00:15:58.224 "num_base_bdevs": 4, 00:15:58.224 "num_base_bdevs_discovered": 3, 00:15:58.224 "num_base_bdevs_operational": 3, 00:15:58.224 "base_bdevs_list": [ 00:15:58.224 { 00:15:58.224 "name": "spare", 00:15:58.224 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:58.224 "is_configured": true, 00:15:58.224 "data_offset": 2048, 00:15:58.224 "data_size": 63488 00:15:58.224 }, 00:15:58.224 { 00:15:58.224 "name": null, 00:15:58.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.224 "is_configured": false, 00:15:58.224 "data_offset": 2048, 00:15:58.224 "data_size": 63488 00:15:58.224 }, 00:15:58.224 { 00:15:58.224 "name": 
"BaseBdev3", 00:15:58.224 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:58.224 "is_configured": true, 00:15:58.224 "data_offset": 2048, 00:15:58.224 "data_size": 63488 00:15:58.224 }, 00:15:58.224 { 00:15:58.224 "name": "BaseBdev4", 00:15:58.224 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:58.224 "is_configured": true, 00:15:58.224 "data_offset": 2048, 00:15:58.224 "data_size": 63488 00:15:58.224 } 00:15:58.224 ] 00:15:58.224 }' 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.224 [2024-11-26 19:05:24.750454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.224 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.224 "name": "raid_bdev1", 00:15:58.225 "uuid": 
"b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:58.225 "strip_size_kb": 0, 00:15:58.225 "state": "online", 00:15:58.225 "raid_level": "raid1", 00:15:58.225 "superblock": true, 00:15:58.225 "num_base_bdevs": 4, 00:15:58.225 "num_base_bdevs_discovered": 2, 00:15:58.225 "num_base_bdevs_operational": 2, 00:15:58.225 "base_bdevs_list": [ 00:15:58.225 { 00:15:58.225 "name": null, 00:15:58.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.225 "is_configured": false, 00:15:58.225 "data_offset": 0, 00:15:58.225 "data_size": 63488 00:15:58.225 }, 00:15:58.225 { 00:15:58.225 "name": null, 00:15:58.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.225 "is_configured": false, 00:15:58.225 "data_offset": 2048, 00:15:58.225 "data_size": 63488 00:15:58.225 }, 00:15:58.225 { 00:15:58.225 "name": "BaseBdev3", 00:15:58.225 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:58.225 "is_configured": true, 00:15:58.225 "data_offset": 2048, 00:15:58.225 "data_size": 63488 00:15:58.225 }, 00:15:58.225 { 00:15:58.225 "name": "BaseBdev4", 00:15:58.225 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:58.225 "is_configured": true, 00:15:58.225 "data_offset": 2048, 00:15:58.225 "data_size": 63488 00:15:58.225 } 00:15:58.225 ] 00:15:58.225 }' 00:15:58.225 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.225 19:05:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.790 19:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:58.790 19:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.790 19:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.790 [2024-11-26 19:05:25.262671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:58.790 [2024-11-26 19:05:25.262973] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:58.791 [2024-11-26 19:05:25.263002] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:58.791 [2024-11-26 19:05:25.263058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:58.791 [2024-11-26 19:05:25.278013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:58.791 19:05:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.791 19:05:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:58.791 [2024-11-26 19:05:25.281114] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.727 "name": "raid_bdev1", 00:15:59.727 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:59.727 "strip_size_kb": 0, 00:15:59.727 "state": "online", 00:15:59.727 "raid_level": "raid1", 00:15:59.727 "superblock": true, 00:15:59.727 "num_base_bdevs": 4, 00:15:59.727 "num_base_bdevs_discovered": 3, 00:15:59.727 "num_base_bdevs_operational": 3, 00:15:59.727 "process": { 00:15:59.727 "type": "rebuild", 00:15:59.727 "target": "spare", 00:15:59.727 "progress": { 00:15:59.727 "blocks": 18432, 00:15:59.727 "percent": 29 00:15:59.727 } 00:15:59.727 }, 00:15:59.727 "base_bdevs_list": [ 00:15:59.727 { 00:15:59.727 "name": "spare", 00:15:59.727 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:15:59.727 "is_configured": true, 00:15:59.727 "data_offset": 2048, 00:15:59.727 "data_size": 63488 00:15:59.727 }, 00:15:59.727 { 00:15:59.727 "name": null, 00:15:59.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.727 "is_configured": false, 00:15:59.727 "data_offset": 2048, 00:15:59.727 "data_size": 63488 00:15:59.727 }, 00:15:59.727 { 00:15:59.727 "name": "BaseBdev3", 00:15:59.727 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:59.727 "is_configured": true, 00:15:59.727 "data_offset": 2048, 00:15:59.727 "data_size": 63488 00:15:59.727 }, 00:15:59.727 { 00:15:59.727 "name": "BaseBdev4", 00:15:59.727 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:59.727 "is_configured": true, 00:15:59.727 "data_offset": 2048, 00:15:59.727 "data_size": 63488 00:15:59.727 } 00:15:59.727 ] 00:15:59.727 }' 00:15:59.727 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.987 [2024-11-26 19:05:26.459665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.987 [2024-11-26 19:05:26.494096] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:59.987 [2024-11-26 19:05:26.494234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.987 [2024-11-26 19:05:26.494261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.987 [2024-11-26 19:05:26.494277] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.987 "name": "raid_bdev1", 00:15:59.987 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:15:59.987 "strip_size_kb": 0, 00:15:59.987 "state": "online", 00:15:59.987 "raid_level": "raid1", 00:15:59.987 "superblock": true, 00:15:59.987 "num_base_bdevs": 4, 00:15:59.987 "num_base_bdevs_discovered": 2, 00:15:59.987 "num_base_bdevs_operational": 2, 00:15:59.987 "base_bdevs_list": [ 00:15:59.987 { 00:15:59.987 "name": null, 00:15:59.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.987 "is_configured": false, 00:15:59.987 "data_offset": 0, 00:15:59.987 "data_size": 63488 00:15:59.987 }, 00:15:59.987 { 00:15:59.987 "name": null, 00:15:59.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.987 "is_configured": false, 00:15:59.987 "data_offset": 2048, 00:15:59.987 "data_size": 63488 00:15:59.987 }, 00:15:59.987 { 00:15:59.987 "name": "BaseBdev3", 00:15:59.987 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:15:59.987 "is_configured": true, 00:15:59.987 "data_offset": 2048, 
00:15:59.987 "data_size": 63488 00:15:59.987 }, 00:15:59.987 { 00:15:59.987 "name": "BaseBdev4", 00:15:59.987 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:15:59.987 "is_configured": true, 00:15:59.987 "data_offset": 2048, 00:15:59.987 "data_size": 63488 00:15:59.987 } 00:15:59.987 ] 00:15:59.987 }' 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.987 19:05:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.555 19:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:00.555 19:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.555 19:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.555 [2024-11-26 19:05:27.027926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:00.555 [2024-11-26 19:05:27.028062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.555 [2024-11-26 19:05:27.028139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:00.555 [2024-11-26 19:05:27.028169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.555 [2024-11-26 19:05:27.029227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.555 [2024-11-26 19:05:27.029319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:00.555 [2024-11-26 19:05:27.029527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:00.555 [2024-11-26 19:05:27.029573] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:00.555 [2024-11-26 19:05:27.029598] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding 
bdev spare to raid bdev raid_bdev1. 00:16:00.555 [2024-11-26 19:05:27.029667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.555 [2024-11-26 19:05:27.051864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:00.555 spare 00:16:00.555 19:05:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.555 19:05:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:00.555 [2024-11-26 19:05:27.055553] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.490 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.490 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.490 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.490 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.490 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.490 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.490 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.490 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.490 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.490 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.490 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.490 "name": "raid_bdev1", 00:16:01.490 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:16:01.490 
"strip_size_kb": 0, 00:16:01.490 "state": "online", 00:16:01.490 "raid_level": "raid1", 00:16:01.490 "superblock": true, 00:16:01.490 "num_base_bdevs": 4, 00:16:01.490 "num_base_bdevs_discovered": 3, 00:16:01.490 "num_base_bdevs_operational": 3, 00:16:01.490 "process": { 00:16:01.490 "type": "rebuild", 00:16:01.490 "target": "spare", 00:16:01.490 "progress": { 00:16:01.490 "blocks": 18432, 00:16:01.490 "percent": 29 00:16:01.490 } 00:16:01.490 }, 00:16:01.490 "base_bdevs_list": [ 00:16:01.490 { 00:16:01.490 "name": "spare", 00:16:01.490 "uuid": "76ad9961-f9d8-5a3f-b87b-ca1ea53db5fb", 00:16:01.490 "is_configured": true, 00:16:01.490 "data_offset": 2048, 00:16:01.490 "data_size": 63488 00:16:01.490 }, 00:16:01.490 { 00:16:01.490 "name": null, 00:16:01.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.490 "is_configured": false, 00:16:01.490 "data_offset": 2048, 00:16:01.490 "data_size": 63488 00:16:01.490 }, 00:16:01.490 { 00:16:01.490 "name": "BaseBdev3", 00:16:01.490 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:16:01.490 "is_configured": true, 00:16:01.490 "data_offset": 2048, 00:16:01.490 "data_size": 63488 00:16:01.490 }, 00:16:01.490 { 00:16:01.490 "name": "BaseBdev4", 00:16:01.490 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:16:01.490 "is_configured": true, 00:16:01.490 "data_offset": 2048, 00:16:01.490 "data_size": 63488 00:16:01.490 } 00:16:01.490 ] 00:16:01.490 }' 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete 
spare 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.748 [2024-11-26 19:05:28.202028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.748 [2024-11-26 19:05:28.268136] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:01.748 [2024-11-26 19:05:28.268263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.748 [2024-11-26 19:05:28.268314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.748 [2024-11-26 19:05:28.268329] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.748 
19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.748 "name": "raid_bdev1", 00:16:01.748 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:16:01.748 "strip_size_kb": 0, 00:16:01.748 "state": "online", 00:16:01.748 "raid_level": "raid1", 00:16:01.748 "superblock": true, 00:16:01.748 "num_base_bdevs": 4, 00:16:01.748 "num_base_bdevs_discovered": 2, 00:16:01.748 "num_base_bdevs_operational": 2, 00:16:01.748 "base_bdevs_list": [ 00:16:01.748 { 00:16:01.748 "name": null, 00:16:01.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.748 "is_configured": false, 00:16:01.748 "data_offset": 0, 00:16:01.748 "data_size": 63488 00:16:01.748 }, 00:16:01.748 { 00:16:01.748 "name": null, 00:16:01.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.748 "is_configured": false, 00:16:01.748 "data_offset": 2048, 00:16:01.748 "data_size": 63488 00:16:01.748 }, 00:16:01.748 { 00:16:01.748 "name": "BaseBdev3", 00:16:01.748 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:16:01.748 "is_configured": true, 00:16:01.748 "data_offset": 2048, 00:16:01.748 "data_size": 63488 00:16:01.748 }, 00:16:01.748 { 00:16:01.748 "name": "BaseBdev4", 00:16:01.748 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:16:01.748 "is_configured": true, 00:16:01.748 "data_offset": 2048, 
00:16:01.748 "data_size": 63488 00:16:01.748 } 00:16:01.748 ] 00:16:01.748 }' 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.748 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.314 "name": "raid_bdev1", 00:16:02.314 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:16:02.314 "strip_size_kb": 0, 00:16:02.314 "state": "online", 00:16:02.314 "raid_level": "raid1", 00:16:02.314 "superblock": true, 00:16:02.314 "num_base_bdevs": 4, 00:16:02.314 "num_base_bdevs_discovered": 2, 00:16:02.314 "num_base_bdevs_operational": 2, 00:16:02.314 "base_bdevs_list": [ 00:16:02.314 { 00:16:02.314 "name": null, 00:16:02.314 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:02.314 "is_configured": false, 00:16:02.314 "data_offset": 0, 00:16:02.314 "data_size": 63488 00:16:02.314 }, 00:16:02.314 { 00:16:02.314 "name": null, 00:16:02.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.314 "is_configured": false, 00:16:02.314 "data_offset": 2048, 00:16:02.314 "data_size": 63488 00:16:02.314 }, 00:16:02.314 { 00:16:02.314 "name": "BaseBdev3", 00:16:02.314 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:16:02.314 "is_configured": true, 00:16:02.314 "data_offset": 2048, 00:16:02.314 "data_size": 63488 00:16:02.314 }, 00:16:02.314 { 00:16:02.314 "name": "BaseBdev4", 00:16:02.314 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:16:02.314 "is_configured": true, 00:16:02.314 "data_offset": 2048, 00:16:02.314 "data_size": 63488 00:16:02.314 } 00:16:02.314 ] 00:16:02.314 }' 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.314 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.573 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.573 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:02.573 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.573 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.573 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.573 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:02.573 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:02.573 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.573 [2024-11-26 19:05:28.970184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:02.573 [2024-11-26 19:05:28.970267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.573 [2024-11-26 19:05:28.970321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:02.573 [2024-11-26 19:05:28.970339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.573 [2024-11-26 19:05:28.971084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.573 [2024-11-26 19:05:28.971127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:02.573 [2024-11-26 19:05:28.971252] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:02.573 [2024-11-26 19:05:28.971276] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:02.573 [2024-11-26 19:05:28.971312] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:02.573 [2024-11-26 19:05:28.971326] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:02.573 BaseBdev1 00:16:02.573 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.573 19:05:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.507 19:05:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.507 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.507 "name": "raid_bdev1", 00:16:03.507 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:16:03.507 "strip_size_kb": 0, 00:16:03.507 "state": "online", 00:16:03.507 "raid_level": "raid1", 00:16:03.507 "superblock": true, 00:16:03.507 "num_base_bdevs": 4, 00:16:03.507 "num_base_bdevs_discovered": 2, 00:16:03.507 "num_base_bdevs_operational": 2, 00:16:03.507 "base_bdevs_list": [ 00:16:03.507 { 00:16:03.507 "name": null, 00:16:03.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.507 
"is_configured": false, 00:16:03.507 "data_offset": 0, 00:16:03.507 "data_size": 63488 00:16:03.507 }, 00:16:03.507 { 00:16:03.507 "name": null, 00:16:03.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.507 "is_configured": false, 00:16:03.507 "data_offset": 2048, 00:16:03.507 "data_size": 63488 00:16:03.507 }, 00:16:03.507 { 00:16:03.507 "name": "BaseBdev3", 00:16:03.507 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:16:03.507 "is_configured": true, 00:16:03.507 "data_offset": 2048, 00:16:03.507 "data_size": 63488 00:16:03.507 }, 00:16:03.507 { 00:16:03.507 "name": "BaseBdev4", 00:16:03.507 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:16:03.507 "is_configured": true, 00:16:03.507 "data_offset": 2048, 00:16:03.507 "data_size": 63488 00:16:03.507 } 00:16:03.507 ] 00:16:03.507 }' 00:16:03.507 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.507 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.076 "name": "raid_bdev1", 00:16:04.076 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:16:04.076 "strip_size_kb": 0, 00:16:04.076 "state": "online", 00:16:04.076 "raid_level": "raid1", 00:16:04.076 "superblock": true, 00:16:04.076 "num_base_bdevs": 4, 00:16:04.076 "num_base_bdevs_discovered": 2, 00:16:04.076 "num_base_bdevs_operational": 2, 00:16:04.076 "base_bdevs_list": [ 00:16:04.076 { 00:16:04.076 "name": null, 00:16:04.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.076 "is_configured": false, 00:16:04.076 "data_offset": 0, 00:16:04.076 "data_size": 63488 00:16:04.076 }, 00:16:04.076 { 00:16:04.076 "name": null, 00:16:04.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.076 "is_configured": false, 00:16:04.076 "data_offset": 2048, 00:16:04.076 "data_size": 63488 00:16:04.076 }, 00:16:04.076 { 00:16:04.076 "name": "BaseBdev3", 00:16:04.076 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:16:04.076 "is_configured": true, 00:16:04.076 "data_offset": 2048, 00:16:04.076 "data_size": 63488 00:16:04.076 }, 00:16:04.076 { 00:16:04.076 "name": "BaseBdev4", 00:16:04.076 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:16:04.076 "is_configured": true, 00:16:04.076 "data_offset": 2048, 00:16:04.076 "data_size": 63488 00:16:04.076 } 00:16:04.076 ] 00:16:04.076 }' 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- 
# [[ none == \n\o\n\e ]] 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.076 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.076 [2024-11-26 19:05:30.695033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.076 [2024-11-26 19:05:30.695301] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:04.076 [2024-11-26 19:05:30.695328] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:04.334 request: 00:16:04.334 { 00:16:04.334 "base_bdev": "BaseBdev1", 00:16:04.334 "raid_bdev": "raid_bdev1", 00:16:04.334 "method": "bdev_raid_add_base_bdev", 00:16:04.334 "req_id": 1 00:16:04.334 } 00:16:04.334 Got JSON-RPC error response 00:16:04.334 response: 00:16:04.334 { 
00:16:04.334 "code": -22, 00:16:04.334 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:04.334 } 00:16:04.334 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:04.334 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:04.334 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:04.334 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:04.334 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:04.334 19:05:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.270 "name": "raid_bdev1", 00:16:05.270 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:16:05.270 "strip_size_kb": 0, 00:16:05.270 "state": "online", 00:16:05.270 "raid_level": "raid1", 00:16:05.270 "superblock": true, 00:16:05.270 "num_base_bdevs": 4, 00:16:05.270 "num_base_bdevs_discovered": 2, 00:16:05.270 "num_base_bdevs_operational": 2, 00:16:05.270 "base_bdevs_list": [ 00:16:05.270 { 00:16:05.270 "name": null, 00:16:05.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.270 "is_configured": false, 00:16:05.270 "data_offset": 0, 00:16:05.270 "data_size": 63488 00:16:05.270 }, 00:16:05.270 { 00:16:05.270 "name": null, 00:16:05.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.270 "is_configured": false, 00:16:05.270 "data_offset": 2048, 00:16:05.270 "data_size": 63488 00:16:05.270 }, 00:16:05.270 { 00:16:05.270 "name": "BaseBdev3", 00:16:05.270 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:16:05.270 "is_configured": true, 00:16:05.270 "data_offset": 2048, 00:16:05.270 "data_size": 63488 00:16:05.270 }, 00:16:05.270 { 00:16:05.270 "name": "BaseBdev4", 00:16:05.270 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:16:05.270 "is_configured": true, 00:16:05.270 "data_offset": 2048, 00:16:05.270 "data_size": 63488 00:16:05.270 } 00:16:05.270 ] 00:16:05.270 }' 00:16:05.270 19:05:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.270 19:05:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.837 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.837 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.837 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.837 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.837 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.837 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.837 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.837 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.837 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.837 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.837 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.837 "name": "raid_bdev1", 00:16:05.837 "uuid": "b7d90fe8-9461-49aa-bfb6-fb1afedaeadd", 00:16:05.837 "strip_size_kb": 0, 00:16:05.837 "state": "online", 00:16:05.837 "raid_level": "raid1", 00:16:05.837 "superblock": true, 00:16:05.837 "num_base_bdevs": 4, 00:16:05.837 "num_base_bdevs_discovered": 2, 00:16:05.837 "num_base_bdevs_operational": 2, 00:16:05.837 "base_bdevs_list": [ 00:16:05.837 { 00:16:05.837 "name": null, 00:16:05.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.838 "is_configured": false, 00:16:05.838 "data_offset": 0, 00:16:05.838 "data_size": 63488 00:16:05.838 }, 00:16:05.838 { 00:16:05.838 "name": null, 00:16:05.838 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:05.838 "is_configured": false, 00:16:05.838 "data_offset": 2048, 00:16:05.838 "data_size": 63488 00:16:05.838 }, 00:16:05.838 { 00:16:05.838 "name": "BaseBdev3", 00:16:05.838 "uuid": "ad403413-be28-5f52-9eca-ccc48a36ff8e", 00:16:05.838 "is_configured": true, 00:16:05.838 "data_offset": 2048, 00:16:05.838 "data_size": 63488 00:16:05.838 }, 00:16:05.838 { 00:16:05.838 "name": "BaseBdev4", 00:16:05.838 "uuid": "fc62abef-9c85-55c1-b20a-006a11325e05", 00:16:05.838 "is_configured": true, 00:16:05.838 "data_offset": 2048, 00:16:05.838 "data_size": 63488 00:16:05.838 } 00:16:05.838 ] 00:16:05.838 }' 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79959 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79959 ']' 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79959 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79959 00:16:05.838 killing process with pid 79959 00:16:05.838 Received shutdown signal, test time was about 19.541690 seconds 00:16:05.838 00:16:05.838 Latency(us) 00:16:05.838 [2024-11-26T19:05:32.461Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:16:05.838 [2024-11-26T19:05:32.461Z] =================================================================================================================== 00:16:05.838 [2024-11-26T19:05:32.461Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79959' 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79959 00:16:05.838 [2024-11-26 19:05:32.411308] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.838 19:05:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79959 00:16:05.838 [2024-11-26 19:05:32.411499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.838 [2024-11-26 19:05:32.411663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.838 [2024-11-26 19:05:32.412394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:06.404 [2024-11-26 19:05:32.830766] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.778 19:05:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:07.778 00:16:07.778 real 0m23.325s 00:16:07.778 user 0m31.570s 00:16:07.778 sys 0m2.494s 00:16:07.778 19:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.778 19:05:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.778 ************************************ 00:16:07.778 END TEST raid_rebuild_test_sb_io 00:16:07.778 
************************************ 00:16:07.778 19:05:34 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:07.778 19:05:34 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:07.778 19:05:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:07.778 19:05:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.778 19:05:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.778 ************************************ 00:16:07.778 START TEST raid5f_state_function_test 00:16:07.778 ************************************ 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.778 19:05:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80734 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:07.778 Process raid pid: 80734 00:16:07.778 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80734' 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80734 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80734 ']' 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.778 19:05:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.778 [2024-11-26 19:05:34.243619] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:16:07.778 [2024-11-26 19:05:34.243890] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.037 [2024-11-26 19:05:34.444945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.037 [2024-11-26 19:05:34.596111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.296 [2024-11-26 19:05:34.828547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.296 [2024-11-26 19:05:34.828613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.862 [2024-11-26 19:05:35.244800] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.862 [2024-11-26 19:05:35.244906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.862 [2024-11-26 19:05:35.244934] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.862 [2024-11-26 19:05:35.244959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.862 [2024-11-26 19:05:35.244975] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:08.862 [2024-11-26 19:05:35.244996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.862 "name": "Existed_Raid", 00:16:08.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.862 "strip_size_kb": 64, 00:16:08.862 "state": "configuring", 00:16:08.862 "raid_level": "raid5f", 00:16:08.862 "superblock": false, 00:16:08.862 "num_base_bdevs": 3, 00:16:08.862 "num_base_bdevs_discovered": 0, 00:16:08.862 "num_base_bdevs_operational": 3, 00:16:08.862 "base_bdevs_list": [ 00:16:08.862 { 00:16:08.862 "name": "BaseBdev1", 00:16:08.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.862 "is_configured": false, 00:16:08.862 "data_offset": 0, 00:16:08.862 "data_size": 0 00:16:08.862 }, 00:16:08.862 { 00:16:08.862 "name": "BaseBdev2", 00:16:08.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.862 "is_configured": false, 00:16:08.862 "data_offset": 0, 00:16:08.862 "data_size": 0 00:16:08.862 }, 00:16:08.862 { 00:16:08.862 "name": "BaseBdev3", 00:16:08.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.862 "is_configured": false, 00:16:08.862 "data_offset": 0, 00:16:08.862 "data_size": 0 00:16:08.862 } 00:16:08.862 ] 00:16:08.862 }' 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.862 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.428 [2024-11-26 19:05:35.756901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:09.428 [2024-11-26 19:05:35.756957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.428 [2024-11-26 19:05:35.764891] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.428 [2024-11-26 19:05:35.764958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.428 [2024-11-26 19:05:35.764976] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.428 [2024-11-26 19:05:35.764992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.428 [2024-11-26 19:05:35.765002] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:09.428 [2024-11-26 19:05:35.765017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.428 [2024-11-26 19:05:35.814572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.428 BaseBdev1 00:16:09.428 19:05:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.428 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.429 [ 00:16:09.429 { 00:16:09.429 "name": "BaseBdev1", 00:16:09.429 "aliases": [ 00:16:09.429 "9d05ff2f-28ad-4c1d-9903-3d045b8989d5" 00:16:09.429 ], 00:16:09.429 "product_name": "Malloc disk", 00:16:09.429 "block_size": 512, 00:16:09.429 "num_blocks": 65536, 00:16:09.429 "uuid": "9d05ff2f-28ad-4c1d-9903-3d045b8989d5", 00:16:09.429 "assigned_rate_limits": { 00:16:09.429 "rw_ios_per_sec": 0, 00:16:09.429 
"rw_mbytes_per_sec": 0, 00:16:09.429 "r_mbytes_per_sec": 0, 00:16:09.429 "w_mbytes_per_sec": 0 00:16:09.429 }, 00:16:09.429 "claimed": true, 00:16:09.429 "claim_type": "exclusive_write", 00:16:09.429 "zoned": false, 00:16:09.429 "supported_io_types": { 00:16:09.429 "read": true, 00:16:09.429 "write": true, 00:16:09.429 "unmap": true, 00:16:09.429 "flush": true, 00:16:09.429 "reset": true, 00:16:09.429 "nvme_admin": false, 00:16:09.429 "nvme_io": false, 00:16:09.429 "nvme_io_md": false, 00:16:09.429 "write_zeroes": true, 00:16:09.429 "zcopy": true, 00:16:09.429 "get_zone_info": false, 00:16:09.429 "zone_management": false, 00:16:09.429 "zone_append": false, 00:16:09.429 "compare": false, 00:16:09.429 "compare_and_write": false, 00:16:09.429 "abort": true, 00:16:09.429 "seek_hole": false, 00:16:09.429 "seek_data": false, 00:16:09.429 "copy": true, 00:16:09.429 "nvme_iov_md": false 00:16:09.429 }, 00:16:09.429 "memory_domains": [ 00:16:09.429 { 00:16:09.429 "dma_device_id": "system", 00:16:09.429 "dma_device_type": 1 00:16:09.429 }, 00:16:09.429 { 00:16:09.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.429 "dma_device_type": 2 00:16:09.429 } 00:16:09.429 ], 00:16:09.429 "driver_specific": {} 00:16:09.429 } 00:16:09.429 ] 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.429 19:05:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.429 "name": "Existed_Raid", 00:16:09.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.429 "strip_size_kb": 64, 00:16:09.429 "state": "configuring", 00:16:09.429 "raid_level": "raid5f", 00:16:09.429 "superblock": false, 00:16:09.429 "num_base_bdevs": 3, 00:16:09.429 "num_base_bdevs_discovered": 1, 00:16:09.429 "num_base_bdevs_operational": 3, 00:16:09.429 "base_bdevs_list": [ 00:16:09.429 { 00:16:09.429 "name": "BaseBdev1", 00:16:09.429 "uuid": "9d05ff2f-28ad-4c1d-9903-3d045b8989d5", 00:16:09.429 "is_configured": true, 00:16:09.429 "data_offset": 0, 00:16:09.429 "data_size": 65536 00:16:09.429 }, 00:16:09.429 { 00:16:09.429 "name": 
"BaseBdev2", 00:16:09.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.429 "is_configured": false, 00:16:09.429 "data_offset": 0, 00:16:09.429 "data_size": 0 00:16:09.429 }, 00:16:09.429 { 00:16:09.429 "name": "BaseBdev3", 00:16:09.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.429 "is_configured": false, 00:16:09.429 "data_offset": 0, 00:16:09.429 "data_size": 0 00:16:09.429 } 00:16:09.429 ] 00:16:09.429 }' 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.429 19:05:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.995 [2024-11-26 19:05:36.386890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:09.995 [2024-11-26 19:05:36.386990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.995 [2024-11-26 19:05:36.399094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.995 [2024-11-26 19:05:36.404782] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:09.995 [2024-11-26 19:05:36.405077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.995 [2024-11-26 19:05:36.405257] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:09.995 [2024-11-26 19:05:36.405484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.995 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.995 "name": "Existed_Raid", 00:16:09.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.995 "strip_size_kb": 64, 00:16:09.995 "state": "configuring", 00:16:09.995 "raid_level": "raid5f", 00:16:09.995 "superblock": false, 00:16:09.995 "num_base_bdevs": 3, 00:16:09.995 "num_base_bdevs_discovered": 1, 00:16:09.995 "num_base_bdevs_operational": 3, 00:16:09.995 "base_bdevs_list": [ 00:16:09.995 { 00:16:09.996 "name": "BaseBdev1", 00:16:09.996 "uuid": "9d05ff2f-28ad-4c1d-9903-3d045b8989d5", 00:16:09.996 "is_configured": true, 00:16:09.996 "data_offset": 0, 00:16:09.996 "data_size": 65536 00:16:09.996 }, 00:16:09.996 { 00:16:09.996 "name": "BaseBdev2", 00:16:09.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.996 "is_configured": false, 00:16:09.996 "data_offset": 0, 00:16:09.996 "data_size": 0 00:16:09.996 }, 00:16:09.996 { 00:16:09.996 "name": "BaseBdev3", 00:16:09.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.996 "is_configured": false, 00:16:09.996 "data_offset": 0, 00:16:09.996 "data_size": 0 00:16:09.996 } 00:16:09.996 ] 00:16:09.996 }' 00:16:09.996 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.996 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.563 [2024-11-26 19:05:36.924324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.563 BaseBdev2 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:10.563 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.564 [ 00:16:10.564 { 00:16:10.564 "name": "BaseBdev2", 00:16:10.564 "aliases": [ 00:16:10.564 "f8f2706d-8365-4188-a81e-5eb62d1a8470" 00:16:10.564 ], 00:16:10.564 "product_name": "Malloc disk", 00:16:10.564 "block_size": 512, 00:16:10.564 "num_blocks": 65536, 00:16:10.564 "uuid": "f8f2706d-8365-4188-a81e-5eb62d1a8470", 00:16:10.564 "assigned_rate_limits": { 00:16:10.564 "rw_ios_per_sec": 0, 00:16:10.564 "rw_mbytes_per_sec": 0, 00:16:10.564 "r_mbytes_per_sec": 0, 00:16:10.564 "w_mbytes_per_sec": 0 00:16:10.564 }, 00:16:10.564 "claimed": true, 00:16:10.564 "claim_type": "exclusive_write", 00:16:10.564 "zoned": false, 00:16:10.564 "supported_io_types": { 00:16:10.564 "read": true, 00:16:10.564 "write": true, 00:16:10.564 "unmap": true, 00:16:10.564 "flush": true, 00:16:10.564 "reset": true, 00:16:10.564 "nvme_admin": false, 00:16:10.564 "nvme_io": false, 00:16:10.564 "nvme_io_md": false, 00:16:10.564 "write_zeroes": true, 00:16:10.564 "zcopy": true, 00:16:10.564 "get_zone_info": false, 00:16:10.564 "zone_management": false, 00:16:10.564 "zone_append": false, 00:16:10.564 "compare": false, 00:16:10.564 "compare_and_write": false, 00:16:10.564 "abort": true, 00:16:10.564 "seek_hole": false, 00:16:10.564 "seek_data": false, 00:16:10.564 "copy": true, 00:16:10.564 "nvme_iov_md": false 00:16:10.564 }, 00:16:10.564 "memory_domains": [ 00:16:10.564 { 00:16:10.564 "dma_device_id": "system", 00:16:10.564 "dma_device_type": 1 00:16:10.564 }, 00:16:10.564 { 00:16:10.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.564 "dma_device_type": 2 00:16:10.564 } 00:16:10.564 ], 00:16:10.564 "driver_specific": {} 00:16:10.564 } 00:16:10.564 ] 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.564 19:05:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.564 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:10.564 "name": "Existed_Raid", 00:16:10.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.564 "strip_size_kb": 64, 00:16:10.564 "state": "configuring", 00:16:10.564 "raid_level": "raid5f", 00:16:10.564 "superblock": false, 00:16:10.564 "num_base_bdevs": 3, 00:16:10.564 "num_base_bdevs_discovered": 2, 00:16:10.564 "num_base_bdevs_operational": 3, 00:16:10.564 "base_bdevs_list": [ 00:16:10.564 { 00:16:10.564 "name": "BaseBdev1", 00:16:10.564 "uuid": "9d05ff2f-28ad-4c1d-9903-3d045b8989d5", 00:16:10.564 "is_configured": true, 00:16:10.564 "data_offset": 0, 00:16:10.564 "data_size": 65536 00:16:10.564 }, 00:16:10.564 { 00:16:10.564 "name": "BaseBdev2", 00:16:10.564 "uuid": "f8f2706d-8365-4188-a81e-5eb62d1a8470", 00:16:10.564 "is_configured": true, 00:16:10.564 "data_offset": 0, 00:16:10.564 "data_size": 65536 00:16:10.564 }, 00:16:10.564 { 00:16:10.564 "name": "BaseBdev3", 00:16:10.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.564 "is_configured": false, 00:16:10.564 "data_offset": 0, 00:16:10.564 "data_size": 0 00:16:10.564 } 00:16:10.564 ] 00:16:10.564 }' 00:16:10.564 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.564 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.823 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:10.823 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.823 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.082 [2024-11-26 19:05:37.493733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.082 [2024-11-26 19:05:37.493819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:11.082 [2024-11-26 19:05:37.493843] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:11.082 [2024-11-26 19:05:37.494391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:11.082 [2024-11-26 19:05:37.499755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:11.082 BaseBdev3 00:16:11.082 [2024-11-26 19:05:37.499921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:11.082 [2024-11-26 19:05:37.500343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.082 [ 00:16:11.082 { 00:16:11.082 "name": "BaseBdev3", 00:16:11.082 "aliases": [ 00:16:11.082 "889fb8a5-592a-465d-906e-79775d36a7b7" 00:16:11.082 ], 00:16:11.082 "product_name": "Malloc disk", 00:16:11.082 "block_size": 512, 00:16:11.082 "num_blocks": 65536, 00:16:11.082 "uuid": "889fb8a5-592a-465d-906e-79775d36a7b7", 00:16:11.082 "assigned_rate_limits": { 00:16:11.082 "rw_ios_per_sec": 0, 00:16:11.082 "rw_mbytes_per_sec": 0, 00:16:11.082 "r_mbytes_per_sec": 0, 00:16:11.082 "w_mbytes_per_sec": 0 00:16:11.082 }, 00:16:11.082 "claimed": true, 00:16:11.082 "claim_type": "exclusive_write", 00:16:11.082 "zoned": false, 00:16:11.082 "supported_io_types": { 00:16:11.082 "read": true, 00:16:11.082 "write": true, 00:16:11.082 "unmap": true, 00:16:11.082 "flush": true, 00:16:11.082 "reset": true, 00:16:11.082 "nvme_admin": false, 00:16:11.082 "nvme_io": false, 00:16:11.082 "nvme_io_md": false, 00:16:11.082 "write_zeroes": true, 00:16:11.082 "zcopy": true, 00:16:11.082 "get_zone_info": false, 00:16:11.082 "zone_management": false, 00:16:11.082 "zone_append": false, 00:16:11.082 "compare": false, 00:16:11.082 "compare_and_write": false, 00:16:11.082 "abort": true, 00:16:11.082 "seek_hole": false, 00:16:11.082 "seek_data": false, 00:16:11.082 "copy": true, 00:16:11.082 "nvme_iov_md": false 00:16:11.082 }, 00:16:11.082 "memory_domains": [ 00:16:11.082 { 00:16:11.082 "dma_device_id": "system", 00:16:11.082 "dma_device_type": 1 00:16:11.082 }, 00:16:11.082 { 00:16:11.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.082 "dma_device_type": 2 00:16:11.082 } 00:16:11.082 ], 00:16:11.082 "driver_specific": {} 00:16:11.082 } 00:16:11.082 ] 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.082 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.083 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.083 19:05:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.083 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.083 "name": "Existed_Raid", 00:16:11.083 "uuid": "d58253f8-567e-4407-9759-c72d0ae78eff", 00:16:11.083 "strip_size_kb": 64, 00:16:11.083 "state": "online", 00:16:11.083 "raid_level": "raid5f", 00:16:11.083 "superblock": false, 00:16:11.083 "num_base_bdevs": 3, 00:16:11.083 "num_base_bdevs_discovered": 3, 00:16:11.083 "num_base_bdevs_operational": 3, 00:16:11.083 "base_bdevs_list": [ 00:16:11.083 { 00:16:11.083 "name": "BaseBdev1", 00:16:11.083 "uuid": "9d05ff2f-28ad-4c1d-9903-3d045b8989d5", 00:16:11.083 "is_configured": true, 00:16:11.083 "data_offset": 0, 00:16:11.083 "data_size": 65536 00:16:11.083 }, 00:16:11.083 { 00:16:11.083 "name": "BaseBdev2", 00:16:11.083 "uuid": "f8f2706d-8365-4188-a81e-5eb62d1a8470", 00:16:11.083 "is_configured": true, 00:16:11.083 "data_offset": 0, 00:16:11.083 "data_size": 65536 00:16:11.083 }, 00:16:11.083 { 00:16:11.083 "name": "BaseBdev3", 00:16:11.083 "uuid": "889fb8a5-592a-465d-906e-79775d36a7b7", 00:16:11.083 "is_configured": true, 00:16:11.083 "data_offset": 0, 00:16:11.083 "data_size": 65536 00:16:11.083 } 00:16:11.083 ] 00:16:11.083 }' 00:16:11.083 19:05:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.083 19:05:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.711 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:11.711 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:11.711 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:11.711 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:11.711 19:05:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:11.711 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:11.711 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:11.711 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:11.711 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.711 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.711 [2024-11-26 19:05:38.034870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.711 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.711 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:11.711 "name": "Existed_Raid", 00:16:11.711 "aliases": [ 00:16:11.711 "d58253f8-567e-4407-9759-c72d0ae78eff" 00:16:11.711 ], 00:16:11.711 "product_name": "Raid Volume", 00:16:11.711 "block_size": 512, 00:16:11.711 "num_blocks": 131072, 00:16:11.711 "uuid": "d58253f8-567e-4407-9759-c72d0ae78eff", 00:16:11.711 "assigned_rate_limits": { 00:16:11.711 "rw_ios_per_sec": 0, 00:16:11.711 "rw_mbytes_per_sec": 0, 00:16:11.711 "r_mbytes_per_sec": 0, 00:16:11.711 "w_mbytes_per_sec": 0 00:16:11.711 }, 00:16:11.711 "claimed": false, 00:16:11.711 "zoned": false, 00:16:11.711 "supported_io_types": { 00:16:11.711 "read": true, 00:16:11.711 "write": true, 00:16:11.711 "unmap": false, 00:16:11.711 "flush": false, 00:16:11.711 "reset": true, 00:16:11.711 "nvme_admin": false, 00:16:11.711 "nvme_io": false, 00:16:11.711 "nvme_io_md": false, 00:16:11.711 "write_zeroes": true, 00:16:11.711 "zcopy": false, 00:16:11.711 "get_zone_info": false, 00:16:11.711 "zone_management": false, 00:16:11.711 "zone_append": false, 
00:16:11.711 "compare": false, 00:16:11.711 "compare_and_write": false, 00:16:11.711 "abort": false, 00:16:11.711 "seek_hole": false, 00:16:11.711 "seek_data": false, 00:16:11.711 "copy": false, 00:16:11.711 "nvme_iov_md": false 00:16:11.711 }, 00:16:11.711 "driver_specific": { 00:16:11.711 "raid": { 00:16:11.711 "uuid": "d58253f8-567e-4407-9759-c72d0ae78eff", 00:16:11.711 "strip_size_kb": 64, 00:16:11.711 "state": "online", 00:16:11.711 "raid_level": "raid5f", 00:16:11.711 "superblock": false, 00:16:11.711 "num_base_bdevs": 3, 00:16:11.711 "num_base_bdevs_discovered": 3, 00:16:11.711 "num_base_bdevs_operational": 3, 00:16:11.711 "base_bdevs_list": [ 00:16:11.711 { 00:16:11.711 "name": "BaseBdev1", 00:16:11.711 "uuid": "9d05ff2f-28ad-4c1d-9903-3d045b8989d5", 00:16:11.711 "is_configured": true, 00:16:11.711 "data_offset": 0, 00:16:11.711 "data_size": 65536 00:16:11.711 }, 00:16:11.711 { 00:16:11.711 "name": "BaseBdev2", 00:16:11.711 "uuid": "f8f2706d-8365-4188-a81e-5eb62d1a8470", 00:16:11.711 "is_configured": true, 00:16:11.711 "data_offset": 0, 00:16:11.712 "data_size": 65536 00:16:11.712 }, 00:16:11.712 { 00:16:11.712 "name": "BaseBdev3", 00:16:11.712 "uuid": "889fb8a5-592a-465d-906e-79775d36a7b7", 00:16:11.712 "is_configured": true, 00:16:11.712 "data_offset": 0, 00:16:11.712 "data_size": 65536 00:16:11.712 } 00:16:11.712 ] 00:16:11.712 } 00:16:11.712 } 00:16:11.712 }' 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:11.712 BaseBdev2 00:16:11.712 BaseBdev3' 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.712 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.712 [2024-11-26 19:05:38.310790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:11.970 
19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.970 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.970 "name": "Existed_Raid", 00:16:11.970 "uuid": "d58253f8-567e-4407-9759-c72d0ae78eff", 00:16:11.970 "strip_size_kb": 64, 00:16:11.970 "state": 
"online", 00:16:11.970 "raid_level": "raid5f", 00:16:11.970 "superblock": false, 00:16:11.970 "num_base_bdevs": 3, 00:16:11.970 "num_base_bdevs_discovered": 2, 00:16:11.970 "num_base_bdevs_operational": 2, 00:16:11.970 "base_bdevs_list": [ 00:16:11.970 { 00:16:11.970 "name": null, 00:16:11.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.970 "is_configured": false, 00:16:11.970 "data_offset": 0, 00:16:11.970 "data_size": 65536 00:16:11.970 }, 00:16:11.970 { 00:16:11.970 "name": "BaseBdev2", 00:16:11.971 "uuid": "f8f2706d-8365-4188-a81e-5eb62d1a8470", 00:16:11.971 "is_configured": true, 00:16:11.971 "data_offset": 0, 00:16:11.971 "data_size": 65536 00:16:11.971 }, 00:16:11.971 { 00:16:11.971 "name": "BaseBdev3", 00:16:11.971 "uuid": "889fb8a5-592a-465d-906e-79775d36a7b7", 00:16:11.971 "is_configured": true, 00:16:11.971 "data_offset": 0, 00:16:11.971 "data_size": 65536 00:16:11.971 } 00:16:11.971 ] 00:16:11.971 }' 00:16:11.971 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.971 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.536 19:05:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.536 [2024-11-26 19:05:38.936952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:12.536 [2024-11-26 19:05:38.937256] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.536 [2024-11-26 19:05:39.032012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.536 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.536 [2024-11-26 19:05:39.096090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:12.536 [2024-11-26 19:05:39.096307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.793 BaseBdev2 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:12.793 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:12.794 [ 00:16:12.794 { 00:16:12.794 "name": "BaseBdev2", 00:16:12.794 "aliases": [ 00:16:12.794 "0d48129c-99ca-429d-9852-ae236c1b1b99" 00:16:12.794 ], 00:16:12.794 "product_name": "Malloc disk", 00:16:12.794 "block_size": 512, 00:16:12.794 "num_blocks": 65536, 00:16:12.794 "uuid": "0d48129c-99ca-429d-9852-ae236c1b1b99", 00:16:12.794 "assigned_rate_limits": { 00:16:12.794 "rw_ios_per_sec": 0, 00:16:12.794 "rw_mbytes_per_sec": 0, 00:16:12.794 "r_mbytes_per_sec": 0, 00:16:12.794 "w_mbytes_per_sec": 0 00:16:12.794 }, 00:16:12.794 "claimed": false, 00:16:12.794 "zoned": false, 00:16:12.794 "supported_io_types": { 00:16:12.794 "read": true, 00:16:12.794 "write": true, 00:16:12.794 "unmap": true, 00:16:12.794 "flush": true, 00:16:12.794 "reset": true, 00:16:12.794 "nvme_admin": false, 00:16:12.794 "nvme_io": false, 00:16:12.794 "nvme_io_md": false, 00:16:12.794 "write_zeroes": true, 00:16:12.794 "zcopy": true, 00:16:12.794 "get_zone_info": false, 00:16:12.794 "zone_management": false, 00:16:12.794 "zone_append": false, 00:16:12.794 "compare": false, 00:16:12.794 "compare_and_write": false, 00:16:12.794 "abort": true, 00:16:12.794 "seek_hole": false, 00:16:12.794 "seek_data": false, 00:16:12.794 "copy": true, 00:16:12.794 "nvme_iov_md": false 00:16:12.794 }, 00:16:12.794 "memory_domains": [ 00:16:12.794 { 00:16:12.794 "dma_device_id": "system", 00:16:12.794 "dma_device_type": 1 00:16:12.794 }, 00:16:12.794 { 00:16:12.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.794 "dma_device_type": 2 00:16:12.794 } 00:16:12.794 ], 00:16:12.794 "driver_specific": {} 00:16:12.794 } 00:16:12.794 ] 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.794 BaseBdev3 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.794 [ 00:16:12.794 { 00:16:12.794 "name": "BaseBdev3", 00:16:12.794 "aliases": [ 00:16:12.794 "b0c15033-ff88-4d43-b952-eab558a227f3" 00:16:12.794 ], 00:16:12.794 "product_name": "Malloc disk", 00:16:12.794 "block_size": 512, 00:16:12.794 "num_blocks": 65536, 00:16:12.794 "uuid": "b0c15033-ff88-4d43-b952-eab558a227f3", 00:16:12.794 "assigned_rate_limits": { 00:16:12.794 "rw_ios_per_sec": 0, 00:16:12.794 "rw_mbytes_per_sec": 0, 00:16:12.794 "r_mbytes_per_sec": 0, 00:16:12.794 "w_mbytes_per_sec": 0 00:16:12.794 }, 00:16:12.794 "claimed": false, 00:16:12.794 "zoned": false, 00:16:12.794 "supported_io_types": { 00:16:12.794 "read": true, 00:16:12.794 "write": true, 00:16:12.794 "unmap": true, 00:16:12.794 "flush": true, 00:16:12.794 "reset": true, 00:16:12.794 "nvme_admin": false, 00:16:12.794 "nvme_io": false, 00:16:12.794 "nvme_io_md": false, 00:16:12.794 "write_zeroes": true, 00:16:12.794 "zcopy": true, 00:16:12.794 "get_zone_info": false, 00:16:12.794 "zone_management": false, 00:16:12.794 "zone_append": false, 00:16:12.794 "compare": false, 00:16:12.794 "compare_and_write": false, 00:16:12.794 "abort": true, 00:16:12.794 "seek_hole": false, 00:16:12.794 "seek_data": false, 00:16:12.794 "copy": true, 00:16:12.794 "nvme_iov_md": false 00:16:12.794 }, 00:16:12.794 "memory_domains": [ 00:16:12.794 { 00:16:12.794 "dma_device_id": "system", 00:16:12.794 "dma_device_type": 1 00:16:12.794 }, 00:16:12.794 { 00:16:12.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.794 "dma_device_type": 2 00:16:12.794 } 00:16:12.794 ], 00:16:12.794 "driver_specific": {} 00:16:12.794 } 00:16:12.794 ] 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:12.794 19:05:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.794 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.794 [2024-11-26 19:05:39.412766] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:12.794 [2024-11-26 19:05:39.412827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:12.794 [2024-11-26 19:05:39.412876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.051 [2024-11-26 19:05:39.415497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.052 19:05:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.052 "name": "Existed_Raid", 00:16:13.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.052 "strip_size_kb": 64, 00:16:13.052 "state": "configuring", 00:16:13.052 "raid_level": "raid5f", 00:16:13.052 "superblock": false, 00:16:13.052 "num_base_bdevs": 3, 00:16:13.052 "num_base_bdevs_discovered": 2, 00:16:13.052 "num_base_bdevs_operational": 3, 00:16:13.052 "base_bdevs_list": [ 00:16:13.052 { 00:16:13.052 "name": "BaseBdev1", 00:16:13.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.052 "is_configured": false, 00:16:13.052 "data_offset": 0, 00:16:13.052 "data_size": 0 00:16:13.052 }, 00:16:13.052 { 00:16:13.052 "name": "BaseBdev2", 00:16:13.052 "uuid": "0d48129c-99ca-429d-9852-ae236c1b1b99", 00:16:13.052 "is_configured": true, 00:16:13.052 "data_offset": 0, 00:16:13.052 "data_size": 65536 00:16:13.052 }, 00:16:13.052 { 00:16:13.052 "name": "BaseBdev3", 00:16:13.052 "uuid": "b0c15033-ff88-4d43-b952-eab558a227f3", 00:16:13.052 "is_configured": true, 
00:16:13.052 "data_offset": 0, 00:16:13.052 "data_size": 65536 00:16:13.052 } 00:16:13.052 ] 00:16:13.052 }' 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.052 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.618 [2024-11-26 19:05:39.940946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.618 19:05:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.618 19:05:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.618 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.618 "name": "Existed_Raid", 00:16:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.618 "strip_size_kb": 64, 00:16:13.618 "state": "configuring", 00:16:13.618 "raid_level": "raid5f", 00:16:13.618 "superblock": false, 00:16:13.618 "num_base_bdevs": 3, 00:16:13.618 "num_base_bdevs_discovered": 1, 00:16:13.618 "num_base_bdevs_operational": 3, 00:16:13.618 "base_bdevs_list": [ 00:16:13.618 { 00:16:13.618 "name": "BaseBdev1", 00:16:13.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.618 "is_configured": false, 00:16:13.618 "data_offset": 0, 00:16:13.618 "data_size": 0 00:16:13.618 }, 00:16:13.618 { 00:16:13.618 "name": null, 00:16:13.618 "uuid": "0d48129c-99ca-429d-9852-ae236c1b1b99", 00:16:13.618 "is_configured": false, 00:16:13.618 "data_offset": 0, 00:16:13.618 "data_size": 65536 00:16:13.618 }, 00:16:13.619 { 00:16:13.619 "name": "BaseBdev3", 00:16:13.619 "uuid": "b0c15033-ff88-4d43-b952-eab558a227f3", 00:16:13.619 "is_configured": true, 00:16:13.619 "data_offset": 0, 00:16:13.619 "data_size": 65536 00:16:13.619 } 00:16:13.619 ] 00:16:13.619 }' 00:16:13.619 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.619 19:05:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.876 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:13.876 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.876 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.876 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.134 [2024-11-26 19:05:40.583712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.134 BaseBdev1 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:14.134 19:05:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.134 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.134 [ 00:16:14.134 { 00:16:14.134 "name": "BaseBdev1", 00:16:14.134 "aliases": [ 00:16:14.134 "59af5d30-8677-4b60-936b-526f49cd9b8d" 00:16:14.134 ], 00:16:14.134 "product_name": "Malloc disk", 00:16:14.134 "block_size": 512, 00:16:14.134 "num_blocks": 65536, 00:16:14.134 "uuid": "59af5d30-8677-4b60-936b-526f49cd9b8d", 00:16:14.134 "assigned_rate_limits": { 00:16:14.134 "rw_ios_per_sec": 0, 00:16:14.134 "rw_mbytes_per_sec": 0, 00:16:14.135 "r_mbytes_per_sec": 0, 00:16:14.135 "w_mbytes_per_sec": 0 00:16:14.135 }, 00:16:14.135 "claimed": true, 00:16:14.135 "claim_type": "exclusive_write", 00:16:14.135 "zoned": false, 00:16:14.135 "supported_io_types": { 00:16:14.135 "read": true, 00:16:14.135 "write": true, 00:16:14.135 "unmap": true, 00:16:14.135 "flush": true, 00:16:14.135 "reset": true, 00:16:14.135 "nvme_admin": false, 00:16:14.135 "nvme_io": false, 00:16:14.135 "nvme_io_md": false, 00:16:14.135 "write_zeroes": true, 00:16:14.135 "zcopy": true, 00:16:14.135 "get_zone_info": false, 00:16:14.135 "zone_management": false, 00:16:14.135 "zone_append": false, 00:16:14.135 
"compare": false, 00:16:14.135 "compare_and_write": false, 00:16:14.135 "abort": true, 00:16:14.135 "seek_hole": false, 00:16:14.135 "seek_data": false, 00:16:14.135 "copy": true, 00:16:14.135 "nvme_iov_md": false 00:16:14.135 }, 00:16:14.135 "memory_domains": [ 00:16:14.135 { 00:16:14.135 "dma_device_id": "system", 00:16:14.135 "dma_device_type": 1 00:16:14.135 }, 00:16:14.135 { 00:16:14.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.135 "dma_device_type": 2 00:16:14.135 } 00:16:14.135 ], 00:16:14.135 "driver_specific": {} 00:16:14.135 } 00:16:14.135 ] 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.135 19:05:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.135 "name": "Existed_Raid", 00:16:14.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.135 "strip_size_kb": 64, 00:16:14.135 "state": "configuring", 00:16:14.135 "raid_level": "raid5f", 00:16:14.135 "superblock": false, 00:16:14.135 "num_base_bdevs": 3, 00:16:14.135 "num_base_bdevs_discovered": 2, 00:16:14.135 "num_base_bdevs_operational": 3, 00:16:14.135 "base_bdevs_list": [ 00:16:14.135 { 00:16:14.135 "name": "BaseBdev1", 00:16:14.135 "uuid": "59af5d30-8677-4b60-936b-526f49cd9b8d", 00:16:14.135 "is_configured": true, 00:16:14.135 "data_offset": 0, 00:16:14.135 "data_size": 65536 00:16:14.135 }, 00:16:14.135 { 00:16:14.135 "name": null, 00:16:14.135 "uuid": "0d48129c-99ca-429d-9852-ae236c1b1b99", 00:16:14.135 "is_configured": false, 00:16:14.135 "data_offset": 0, 00:16:14.135 "data_size": 65536 00:16:14.135 }, 00:16:14.135 { 00:16:14.135 "name": "BaseBdev3", 00:16:14.135 "uuid": "b0c15033-ff88-4d43-b952-eab558a227f3", 00:16:14.135 "is_configured": true, 00:16:14.135 "data_offset": 0, 00:16:14.135 "data_size": 65536 00:16:14.135 } 00:16:14.135 ] 00:16:14.135 }' 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.135 19:05:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.702 19:05:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.702 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:14.702 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.702 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.702 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.702 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:14.702 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:14.702 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.702 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.702 [2024-11-26 19:05:41.231971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:14.702 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.703 19:05:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.703 "name": "Existed_Raid", 00:16:14.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.703 "strip_size_kb": 64, 00:16:14.703 "state": "configuring", 00:16:14.703 "raid_level": "raid5f", 00:16:14.703 "superblock": false, 00:16:14.703 "num_base_bdevs": 3, 00:16:14.703 "num_base_bdevs_discovered": 1, 00:16:14.703 "num_base_bdevs_operational": 3, 00:16:14.703 "base_bdevs_list": [ 00:16:14.703 { 00:16:14.703 "name": "BaseBdev1", 00:16:14.703 "uuid": "59af5d30-8677-4b60-936b-526f49cd9b8d", 00:16:14.703 "is_configured": true, 00:16:14.703 "data_offset": 0, 00:16:14.703 "data_size": 65536 00:16:14.703 }, 00:16:14.703 { 00:16:14.703 "name": null, 00:16:14.703 "uuid": "0d48129c-99ca-429d-9852-ae236c1b1b99", 00:16:14.703 "is_configured": false, 00:16:14.703 "data_offset": 0, 00:16:14.703 "data_size": 65536 00:16:14.703 }, 00:16:14.703 { 00:16:14.703 "name": null, 
00:16:14.703 "uuid": "b0c15033-ff88-4d43-b952-eab558a227f3", 00:16:14.703 "is_configured": false, 00:16:14.703 "data_offset": 0, 00:16:14.703 "data_size": 65536 00:16:14.703 } 00:16:14.703 ] 00:16:14.703 }' 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.703 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.268 [2024-11-26 19:05:41.844229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.268 19:05:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.268 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.526 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.526 "name": "Existed_Raid", 00:16:15.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.526 "strip_size_kb": 64, 00:16:15.526 "state": "configuring", 00:16:15.526 "raid_level": "raid5f", 00:16:15.526 "superblock": false, 00:16:15.526 "num_base_bdevs": 3, 00:16:15.526 "num_base_bdevs_discovered": 2, 00:16:15.526 "num_base_bdevs_operational": 3, 00:16:15.526 "base_bdevs_list": [ 00:16:15.526 { 
00:16:15.526 "name": "BaseBdev1", 00:16:15.526 "uuid": "59af5d30-8677-4b60-936b-526f49cd9b8d", 00:16:15.526 "is_configured": true, 00:16:15.526 "data_offset": 0, 00:16:15.526 "data_size": 65536 00:16:15.526 }, 00:16:15.526 { 00:16:15.526 "name": null, 00:16:15.526 "uuid": "0d48129c-99ca-429d-9852-ae236c1b1b99", 00:16:15.526 "is_configured": false, 00:16:15.526 "data_offset": 0, 00:16:15.526 "data_size": 65536 00:16:15.526 }, 00:16:15.526 { 00:16:15.526 "name": "BaseBdev3", 00:16:15.526 "uuid": "b0c15033-ff88-4d43-b952-eab558a227f3", 00:16:15.526 "is_configured": true, 00:16:15.526 "data_offset": 0, 00:16:15.526 "data_size": 65536 00:16:15.526 } 00:16:15.526 ] 00:16:15.526 }' 00:16:15.526 19:05:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.526 19:05:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.091 [2024-11-26 19:05:42.500451] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.091 19:05:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.091 "name": "Existed_Raid", 00:16:16.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.091 "strip_size_kb": 64, 00:16:16.091 "state": "configuring", 00:16:16.091 "raid_level": "raid5f", 00:16:16.091 "superblock": false, 00:16:16.091 "num_base_bdevs": 3, 00:16:16.091 "num_base_bdevs_discovered": 1, 00:16:16.091 "num_base_bdevs_operational": 3, 00:16:16.091 "base_bdevs_list": [ 00:16:16.091 { 00:16:16.091 "name": null, 00:16:16.091 "uuid": "59af5d30-8677-4b60-936b-526f49cd9b8d", 00:16:16.091 "is_configured": false, 00:16:16.091 "data_offset": 0, 00:16:16.091 "data_size": 65536 00:16:16.091 }, 00:16:16.092 { 00:16:16.092 "name": null, 00:16:16.092 "uuid": "0d48129c-99ca-429d-9852-ae236c1b1b99", 00:16:16.092 "is_configured": false, 00:16:16.092 "data_offset": 0, 00:16:16.092 "data_size": 65536 00:16:16.092 }, 00:16:16.092 { 00:16:16.092 "name": "BaseBdev3", 00:16:16.092 "uuid": "b0c15033-ff88-4d43-b952-eab558a227f3", 00:16:16.092 "is_configured": true, 00:16:16.092 "data_offset": 0, 00:16:16.092 "data_size": 65536 00:16:16.092 } 00:16:16.092 ] 00:16:16.092 }' 00:16:16.092 19:05:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.092 19:05:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.705 [2024-11-26 19:05:43.209190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.705 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.706 19:05:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.706 "name": "Existed_Raid", 00:16:16.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.706 "strip_size_kb": 64, 00:16:16.706 "state": "configuring", 00:16:16.706 "raid_level": "raid5f", 00:16:16.706 "superblock": false, 00:16:16.706 "num_base_bdevs": 3, 00:16:16.706 "num_base_bdevs_discovered": 2, 00:16:16.706 "num_base_bdevs_operational": 3, 00:16:16.706 "base_bdevs_list": [ 00:16:16.706 { 00:16:16.706 "name": null, 00:16:16.706 "uuid": "59af5d30-8677-4b60-936b-526f49cd9b8d", 00:16:16.706 "is_configured": false, 00:16:16.706 "data_offset": 0, 00:16:16.706 "data_size": 65536 00:16:16.706 }, 00:16:16.706 { 00:16:16.706 "name": "BaseBdev2", 00:16:16.706 "uuid": "0d48129c-99ca-429d-9852-ae236c1b1b99", 00:16:16.706 "is_configured": true, 00:16:16.706 "data_offset": 0, 00:16:16.706 "data_size": 65536 00:16:16.706 }, 00:16:16.706 { 00:16:16.706 "name": "BaseBdev3", 00:16:16.706 "uuid": "b0c15033-ff88-4d43-b952-eab558a227f3", 00:16:16.706 "is_configured": true, 00:16:16.706 "data_offset": 0, 00:16:16.706 "data_size": 65536 00:16:16.706 } 00:16:16.706 ] 00:16:16.706 }' 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.706 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.274 19:05:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 59af5d30-8677-4b60-936b-526f49cd9b8d 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.274 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.533 [2024-11-26 19:05:43.915717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:17.533 [2024-11-26 19:05:43.915812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:17.533 [2024-11-26 19:05:43.915830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:17.533 [2024-11-26 19:05:43.916174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:17.533 NewBaseBdev 00:16:17.533 [2024-11-26 19:05:43.921139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:17.533 [2024-11-26 19:05:43.921168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:17.533 [2024-11-26 19:05:43.921539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.533 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.533 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:17.533 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:17.533 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.533 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:17.533 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.533 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.533 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.533 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.534 19:05:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.534 [ 00:16:17.534 { 00:16:17.534 "name": "NewBaseBdev", 00:16:17.534 "aliases": [ 00:16:17.534 "59af5d30-8677-4b60-936b-526f49cd9b8d" 00:16:17.534 ], 00:16:17.534 "product_name": "Malloc disk", 00:16:17.534 "block_size": 512, 00:16:17.534 "num_blocks": 65536, 00:16:17.534 "uuid": "59af5d30-8677-4b60-936b-526f49cd9b8d", 00:16:17.534 "assigned_rate_limits": { 00:16:17.534 "rw_ios_per_sec": 0, 00:16:17.534 "rw_mbytes_per_sec": 0, 00:16:17.534 "r_mbytes_per_sec": 0, 00:16:17.534 "w_mbytes_per_sec": 0 00:16:17.534 }, 00:16:17.534 "claimed": true, 00:16:17.534 "claim_type": "exclusive_write", 00:16:17.534 "zoned": false, 00:16:17.534 "supported_io_types": { 00:16:17.534 "read": true, 00:16:17.534 "write": true, 00:16:17.534 "unmap": true, 00:16:17.534 "flush": true, 00:16:17.534 "reset": true, 00:16:17.534 "nvme_admin": false, 00:16:17.534 "nvme_io": false, 00:16:17.534 "nvme_io_md": false, 00:16:17.534 "write_zeroes": true, 00:16:17.534 "zcopy": true, 00:16:17.534 "get_zone_info": false, 00:16:17.534 "zone_management": false, 00:16:17.534 "zone_append": false, 00:16:17.534 "compare": false, 00:16:17.534 "compare_and_write": false, 00:16:17.534 "abort": true, 00:16:17.534 "seek_hole": false, 00:16:17.534 "seek_data": false, 00:16:17.534 "copy": true, 00:16:17.534 "nvme_iov_md": false 00:16:17.534 }, 00:16:17.534 "memory_domains": [ 00:16:17.534 { 00:16:17.534 "dma_device_id": "system", 00:16:17.534 "dma_device_type": 1 00:16:17.534 }, 00:16:17.534 { 00:16:17.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.534 "dma_device_type": 2 00:16:17.534 } 00:16:17.534 ], 00:16:17.534 "driver_specific": {} 00:16:17.534 } 00:16:17.534 ] 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:17.534 19:05:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.534 19:05:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.534 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.534 "name": "Existed_Raid", 00:16:17.534 "uuid": "7e1bdb4c-88ca-4d41-9033-c4004638df8a", 00:16:17.534 "strip_size_kb": 64, 00:16:17.534 "state": "online", 
00:16:17.534 "raid_level": "raid5f", 00:16:17.534 "superblock": false, 00:16:17.534 "num_base_bdevs": 3, 00:16:17.534 "num_base_bdevs_discovered": 3, 00:16:17.534 "num_base_bdevs_operational": 3, 00:16:17.534 "base_bdevs_list": [ 00:16:17.534 { 00:16:17.534 "name": "NewBaseBdev", 00:16:17.534 "uuid": "59af5d30-8677-4b60-936b-526f49cd9b8d", 00:16:17.534 "is_configured": true, 00:16:17.534 "data_offset": 0, 00:16:17.534 "data_size": 65536 00:16:17.534 }, 00:16:17.534 { 00:16:17.534 "name": "BaseBdev2", 00:16:17.534 "uuid": "0d48129c-99ca-429d-9852-ae236c1b1b99", 00:16:17.534 "is_configured": true, 00:16:17.534 "data_offset": 0, 00:16:17.534 "data_size": 65536 00:16:17.534 }, 00:16:17.534 { 00:16:17.534 "name": "BaseBdev3", 00:16:17.534 "uuid": "b0c15033-ff88-4d43-b952-eab558a227f3", 00:16:17.534 "is_configured": true, 00:16:17.534 "data_offset": 0, 00:16:17.534 "data_size": 65536 00:16:17.534 } 00:16:17.534 ] 00:16:17.534 }' 00:16:17.534 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.534 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.101 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:18.101 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:18.101 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:18.101 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:18.101 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:18.101 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:18.101 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:18.101 19:05:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.101 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:18.101 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.101 [2024-11-26 19:05:44.535776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.101 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.101 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:18.101 "name": "Existed_Raid", 00:16:18.101 "aliases": [ 00:16:18.101 "7e1bdb4c-88ca-4d41-9033-c4004638df8a" 00:16:18.101 ], 00:16:18.101 "product_name": "Raid Volume", 00:16:18.101 "block_size": 512, 00:16:18.101 "num_blocks": 131072, 00:16:18.101 "uuid": "7e1bdb4c-88ca-4d41-9033-c4004638df8a", 00:16:18.101 "assigned_rate_limits": { 00:16:18.101 "rw_ios_per_sec": 0, 00:16:18.101 "rw_mbytes_per_sec": 0, 00:16:18.101 "r_mbytes_per_sec": 0, 00:16:18.101 "w_mbytes_per_sec": 0 00:16:18.101 }, 00:16:18.101 "claimed": false, 00:16:18.101 "zoned": false, 00:16:18.101 "supported_io_types": { 00:16:18.101 "read": true, 00:16:18.101 "write": true, 00:16:18.101 "unmap": false, 00:16:18.101 "flush": false, 00:16:18.101 "reset": true, 00:16:18.101 "nvme_admin": false, 00:16:18.101 "nvme_io": false, 00:16:18.101 "nvme_io_md": false, 00:16:18.101 "write_zeroes": true, 00:16:18.101 "zcopy": false, 00:16:18.101 "get_zone_info": false, 00:16:18.101 "zone_management": false, 00:16:18.101 "zone_append": false, 00:16:18.101 "compare": false, 00:16:18.101 "compare_and_write": false, 00:16:18.101 "abort": false, 00:16:18.101 "seek_hole": false, 00:16:18.101 "seek_data": false, 00:16:18.101 "copy": false, 00:16:18.101 "nvme_iov_md": false 00:16:18.101 }, 00:16:18.101 "driver_specific": { 00:16:18.101 "raid": { 00:16:18.102 "uuid": 
"7e1bdb4c-88ca-4d41-9033-c4004638df8a", 00:16:18.102 "strip_size_kb": 64, 00:16:18.102 "state": "online", 00:16:18.102 "raid_level": "raid5f", 00:16:18.102 "superblock": false, 00:16:18.102 "num_base_bdevs": 3, 00:16:18.102 "num_base_bdevs_discovered": 3, 00:16:18.102 "num_base_bdevs_operational": 3, 00:16:18.102 "base_bdevs_list": [ 00:16:18.102 { 00:16:18.102 "name": "NewBaseBdev", 00:16:18.102 "uuid": "59af5d30-8677-4b60-936b-526f49cd9b8d", 00:16:18.102 "is_configured": true, 00:16:18.102 "data_offset": 0, 00:16:18.102 "data_size": 65536 00:16:18.102 }, 00:16:18.102 { 00:16:18.102 "name": "BaseBdev2", 00:16:18.102 "uuid": "0d48129c-99ca-429d-9852-ae236c1b1b99", 00:16:18.102 "is_configured": true, 00:16:18.102 "data_offset": 0, 00:16:18.102 "data_size": 65536 00:16:18.102 }, 00:16:18.102 { 00:16:18.102 "name": "BaseBdev3", 00:16:18.102 "uuid": "b0c15033-ff88-4d43-b952-eab558a227f3", 00:16:18.102 "is_configured": true, 00:16:18.102 "data_offset": 0, 00:16:18.102 "data_size": 65536 00:16:18.102 } 00:16:18.102 ] 00:16:18.102 } 00:16:18.102 } 00:16:18.102 }' 00:16:18.102 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:18.102 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:18.102 BaseBdev2 00:16:18.102 BaseBdev3' 00:16:18.102 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.102 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:18.102 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.102 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:18.102 19:05:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.102 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.102 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.360 [2024-11-26 19:05:44.879461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.360 [2024-11-26 19:05:44.879730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.360 [2024-11-26 19:05:44.879888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.360 [2024-11-26 19:05:44.880344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.360 [2024-11-26 19:05:44.880373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80734 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80734 ']' 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80734 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80734 00:16:18.360 killing process with pid 80734 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80734' 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80734 00:16:18.360 [2024-11-26 19:05:44.918602] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.360 19:05:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80734 00:16:18.928 [2024-11-26 19:05:45.287532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:20.305 00:16:20.305 real 0m12.396s 00:16:20.305 user 0m20.271s 00:16:20.305 sys 0m1.693s 00:16:20.305 ************************************ 00:16:20.305 END TEST raid5f_state_function_test 00:16:20.305 ************************************ 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.305 19:05:46 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:20.305 19:05:46 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:20.305 19:05:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.305 19:05:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.305 ************************************ 00:16:20.305 START TEST raid5f_state_function_test_sb 00:16:20.305 ************************************ 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:20.305 19:05:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:20.305 Process raid pid: 81373 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81373 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81373' 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 81373 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81373 ']' 00:16:20.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.305 19:05:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.305 [2024-11-26 19:05:46.662944] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:16:20.305 [2024-11-26 19:05:46.663363] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.305 [2024-11-26 19:05:46.842935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.564 [2024-11-26 19:05:46.992940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.823 [2024-11-26 19:05:47.224622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.823 [2024-11-26 19:05:47.224683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.081 [2024-11-26 19:05:47.626542] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.081 [2024-11-26 19:05:47.627250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.081 [2024-11-26 19:05:47.627401] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:21.081 [2024-11-26 19:05:47.627437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.081 [2024-11-26 19:05:47.627450] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:21.081 [2024-11-26 19:05:47.627464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.081 19:05:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.081 "name": "Existed_Raid", 00:16:21.081 "uuid": "1c3aed43-c88b-4480-9715-de2039185309", 00:16:21.081 "strip_size_kb": 64, 00:16:21.081 "state": "configuring", 00:16:21.081 "raid_level": "raid5f", 00:16:21.081 "superblock": true, 00:16:21.081 "num_base_bdevs": 3, 00:16:21.081 "num_base_bdevs_discovered": 0, 00:16:21.081 "num_base_bdevs_operational": 3, 00:16:21.081 "base_bdevs_list": [ 00:16:21.081 { 00:16:21.081 "name": "BaseBdev1", 00:16:21.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.081 "is_configured": false, 00:16:21.081 "data_offset": 0, 00:16:21.081 "data_size": 0 00:16:21.081 }, 00:16:21.081 { 00:16:21.081 "name": "BaseBdev2", 00:16:21.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.081 "is_configured": false, 00:16:21.081 "data_offset": 0, 00:16:21.081 "data_size": 0 00:16:21.081 }, 00:16:21.081 { 00:16:21.081 "name": "BaseBdev3", 00:16:21.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.081 "is_configured": false, 00:16:21.081 "data_offset": 0, 00:16:21.081 "data_size": 0 00:16:21.081 } 00:16:21.081 ] 00:16:21.081 }' 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.081 19:05:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.649 [2024-11-26 19:05:48.154620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.649 
[2024-11-26 19:05:48.154678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.649 [2024-11-26 19:05:48.162662] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.649 [2024-11-26 19:05:48.162729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.649 [2024-11-26 19:05:48.162795] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:21.649 [2024-11-26 19:05:48.162940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.649 [2024-11-26 19:05:48.162991] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:21.649 [2024-11-26 19:05:48.163138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.649 [2024-11-26 19:05:48.216356] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.649 BaseBdev1 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.649 [ 00:16:21.649 { 00:16:21.649 "name": "BaseBdev1", 00:16:21.649 "aliases": [ 00:16:21.649 "22db7cc0-f417-4f47-8a58-69a77a9cf0cf" 00:16:21.649 ], 00:16:21.649 "product_name": "Malloc disk", 00:16:21.649 "block_size": 512, 00:16:21.649 
"num_blocks": 65536, 00:16:21.649 "uuid": "22db7cc0-f417-4f47-8a58-69a77a9cf0cf", 00:16:21.649 "assigned_rate_limits": { 00:16:21.649 "rw_ios_per_sec": 0, 00:16:21.649 "rw_mbytes_per_sec": 0, 00:16:21.649 "r_mbytes_per_sec": 0, 00:16:21.649 "w_mbytes_per_sec": 0 00:16:21.649 }, 00:16:21.649 "claimed": true, 00:16:21.649 "claim_type": "exclusive_write", 00:16:21.649 "zoned": false, 00:16:21.649 "supported_io_types": { 00:16:21.649 "read": true, 00:16:21.649 "write": true, 00:16:21.649 "unmap": true, 00:16:21.649 "flush": true, 00:16:21.649 "reset": true, 00:16:21.649 "nvme_admin": false, 00:16:21.649 "nvme_io": false, 00:16:21.649 "nvme_io_md": false, 00:16:21.649 "write_zeroes": true, 00:16:21.649 "zcopy": true, 00:16:21.649 "get_zone_info": false, 00:16:21.649 "zone_management": false, 00:16:21.649 "zone_append": false, 00:16:21.649 "compare": false, 00:16:21.649 "compare_and_write": false, 00:16:21.649 "abort": true, 00:16:21.649 "seek_hole": false, 00:16:21.649 "seek_data": false, 00:16:21.649 "copy": true, 00:16:21.649 "nvme_iov_md": false 00:16:21.649 }, 00:16:21.649 "memory_domains": [ 00:16:21.649 { 00:16:21.649 "dma_device_id": "system", 00:16:21.649 "dma_device_type": 1 00:16:21.649 }, 00:16:21.649 { 00:16:21.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.649 "dma_device_type": 2 00:16:21.649 } 00:16:21.649 ], 00:16:21.649 "driver_specific": {} 00:16:21.649 } 00:16:21.649 ] 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.649 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.908 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.908 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.908 "name": "Existed_Raid", 00:16:21.908 "uuid": "b5c50a55-eb45-479f-9dc6-8f3a55fc9692", 00:16:21.908 "strip_size_kb": 64, 00:16:21.908 "state": "configuring", 00:16:21.908 "raid_level": "raid5f", 00:16:21.909 "superblock": true, 00:16:21.909 "num_base_bdevs": 3, 00:16:21.909 "num_base_bdevs_discovered": 1, 00:16:21.909 "num_base_bdevs_operational": 3, 00:16:21.909 "base_bdevs_list": [ 00:16:21.909 { 00:16:21.909 
"name": "BaseBdev1", 00:16:21.909 "uuid": "22db7cc0-f417-4f47-8a58-69a77a9cf0cf", 00:16:21.909 "is_configured": true, 00:16:21.909 "data_offset": 2048, 00:16:21.909 "data_size": 63488 00:16:21.909 }, 00:16:21.909 { 00:16:21.909 "name": "BaseBdev2", 00:16:21.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.909 "is_configured": false, 00:16:21.909 "data_offset": 0, 00:16:21.909 "data_size": 0 00:16:21.909 }, 00:16:21.909 { 00:16:21.909 "name": "BaseBdev3", 00:16:21.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.909 "is_configured": false, 00:16:21.909 "data_offset": 0, 00:16:21.909 "data_size": 0 00:16:21.909 } 00:16:21.909 ] 00:16:21.909 }' 00:16:21.909 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.909 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.167 [2024-11-26 19:05:48.748571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:22.167 [2024-11-26 19:05:48.748648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:22.167 [2024-11-26 19:05:48.760672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.167 [2024-11-26 19:05:48.763495] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.167 [2024-11-26 19:05:48.763694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.167 [2024-11-26 19:05:48.763812] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:22.167 [2024-11-26 19:05:48.763938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.167 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.426 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.426 "name": "Existed_Raid", 00:16:22.426 "uuid": "56b6f541-fb11-4aea-b966-91758d1b94d9", 00:16:22.426 "strip_size_kb": 64, 00:16:22.426 "state": "configuring", 00:16:22.426 "raid_level": "raid5f", 00:16:22.426 "superblock": true, 00:16:22.426 "num_base_bdevs": 3, 00:16:22.426 "num_base_bdevs_discovered": 1, 00:16:22.426 "num_base_bdevs_operational": 3, 00:16:22.426 "base_bdevs_list": [ 00:16:22.426 { 00:16:22.426 "name": "BaseBdev1", 00:16:22.426 "uuid": "22db7cc0-f417-4f47-8a58-69a77a9cf0cf", 00:16:22.426 "is_configured": true, 00:16:22.426 "data_offset": 2048, 00:16:22.426 "data_size": 63488 00:16:22.426 }, 00:16:22.426 { 00:16:22.426 "name": "BaseBdev2", 00:16:22.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.426 "is_configured": false, 00:16:22.426 "data_offset": 0, 00:16:22.426 "data_size": 0 00:16:22.426 }, 00:16:22.426 { 00:16:22.426 "name": "BaseBdev3", 00:16:22.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.426 "is_configured": false, 00:16:22.426 "data_offset": 0, 00:16:22.426 "data_size": 
0 00:16:22.426 } 00:16:22.426 ] 00:16:22.426 }' 00:16:22.426 19:05:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.426 19:05:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.993 BaseBdev2 00:16:22.993 [2024-11-26 19:05:49.376542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.993 [ 00:16:22.993 { 00:16:22.993 "name": "BaseBdev2", 00:16:22.993 "aliases": [ 00:16:22.993 "5392e2fc-4ed7-406e-b775-316b522972d8" 00:16:22.993 ], 00:16:22.993 "product_name": "Malloc disk", 00:16:22.993 "block_size": 512, 00:16:22.993 "num_blocks": 65536, 00:16:22.993 "uuid": "5392e2fc-4ed7-406e-b775-316b522972d8", 00:16:22.993 "assigned_rate_limits": { 00:16:22.993 "rw_ios_per_sec": 0, 00:16:22.993 "rw_mbytes_per_sec": 0, 00:16:22.993 "r_mbytes_per_sec": 0, 00:16:22.993 "w_mbytes_per_sec": 0 00:16:22.993 }, 00:16:22.993 "claimed": true, 00:16:22.993 "claim_type": "exclusive_write", 00:16:22.993 "zoned": false, 00:16:22.993 "supported_io_types": { 00:16:22.993 "read": true, 00:16:22.993 "write": true, 00:16:22.993 "unmap": true, 00:16:22.993 "flush": true, 00:16:22.993 "reset": true, 00:16:22.993 "nvme_admin": false, 00:16:22.993 "nvme_io": false, 00:16:22.993 "nvme_io_md": false, 00:16:22.993 "write_zeroes": true, 00:16:22.993 "zcopy": true, 00:16:22.993 "get_zone_info": false, 00:16:22.993 "zone_management": false, 00:16:22.993 "zone_append": false, 00:16:22.993 "compare": false, 00:16:22.993 "compare_and_write": false, 00:16:22.993 "abort": true, 00:16:22.993 "seek_hole": false, 00:16:22.993 "seek_data": false, 00:16:22.993 "copy": true, 00:16:22.993 "nvme_iov_md": false 00:16:22.993 }, 00:16:22.993 "memory_domains": [ 00:16:22.993 { 00:16:22.993 "dma_device_id": "system", 00:16:22.993 "dma_device_type": 1 00:16:22.993 }, 00:16:22.993 { 00:16:22.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.993 "dma_device_type": 2 00:16:22.993 } 
00:16:22.993 ], 00:16:22.993 "driver_specific": {} 00:16:22.993 } 00:16:22.993 ] 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.993 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.993 "name": "Existed_Raid", 00:16:22.993 "uuid": "56b6f541-fb11-4aea-b966-91758d1b94d9", 00:16:22.993 "strip_size_kb": 64, 00:16:22.993 "state": "configuring", 00:16:22.993 "raid_level": "raid5f", 00:16:22.993 "superblock": true, 00:16:22.993 "num_base_bdevs": 3, 00:16:22.993 "num_base_bdevs_discovered": 2, 00:16:22.993 "num_base_bdevs_operational": 3, 00:16:22.993 "base_bdevs_list": [ 00:16:22.993 { 00:16:22.993 "name": "BaseBdev1", 00:16:22.993 "uuid": "22db7cc0-f417-4f47-8a58-69a77a9cf0cf", 00:16:22.993 "is_configured": true, 00:16:22.993 "data_offset": 2048, 00:16:22.993 "data_size": 63488 00:16:22.993 }, 00:16:22.993 { 00:16:22.993 "name": "BaseBdev2", 00:16:22.993 "uuid": "5392e2fc-4ed7-406e-b775-316b522972d8", 00:16:22.993 "is_configured": true, 00:16:22.993 "data_offset": 2048, 00:16:22.993 "data_size": 63488 00:16:22.993 }, 00:16:22.993 { 00:16:22.993 "name": "BaseBdev3", 00:16:22.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.993 "is_configured": false, 00:16:22.993 "data_offset": 0, 00:16:22.993 "data_size": 0 00:16:22.993 } 00:16:22.994 ] 00:16:22.994 }' 00:16:22.994 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.994 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.561 [2024-11-26 19:05:49.993866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.561 [2024-11-26 19:05:49.994240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:23.561 [2024-11-26 19:05:49.994271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:23.561 BaseBdev3 00:16:23.561 [2024-11-26 19:05:49.994680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.561 19:05:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.561 [2024-11-26 19:05:50.000218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:23.561 [2024-11-26 19:05:50.000412] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:23.561 [2024-11-26 19:05:50.000898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.561 [ 00:16:23.561 { 00:16:23.561 "name": "BaseBdev3", 00:16:23.561 "aliases": [ 00:16:23.561 "ed50f99e-be8d-4264-8036-e07e0ba6dd62" 00:16:23.561 ], 00:16:23.561 "product_name": "Malloc disk", 00:16:23.561 "block_size": 512, 00:16:23.561 "num_blocks": 65536, 00:16:23.561 "uuid": "ed50f99e-be8d-4264-8036-e07e0ba6dd62", 00:16:23.561 "assigned_rate_limits": { 00:16:23.561 "rw_ios_per_sec": 0, 00:16:23.561 "rw_mbytes_per_sec": 0, 00:16:23.561 "r_mbytes_per_sec": 0, 00:16:23.561 "w_mbytes_per_sec": 0 00:16:23.561 }, 00:16:23.561 "claimed": true, 00:16:23.561 "claim_type": "exclusive_write", 00:16:23.561 "zoned": false, 00:16:23.561 "supported_io_types": { 00:16:23.561 "read": true, 00:16:23.561 "write": true, 00:16:23.561 "unmap": true, 00:16:23.561 "flush": true, 00:16:23.561 "reset": true, 00:16:23.561 "nvme_admin": false, 00:16:23.561 "nvme_io": false, 00:16:23.561 "nvme_io_md": false, 00:16:23.561 "write_zeroes": true, 00:16:23.561 "zcopy": true, 00:16:23.561 "get_zone_info": false, 00:16:23.561 "zone_management": false, 00:16:23.561 "zone_append": false, 00:16:23.561 "compare": false, 00:16:23.561 "compare_and_write": false, 00:16:23.561 "abort": true, 00:16:23.561 "seek_hole": false, 00:16:23.561 "seek_data": false, 00:16:23.561 "copy": true, 00:16:23.561 
"nvme_iov_md": false 00:16:23.561 }, 00:16:23.561 "memory_domains": [ 00:16:23.561 { 00:16:23.561 "dma_device_id": "system", 00:16:23.561 "dma_device_type": 1 00:16:23.561 }, 00:16:23.561 { 00:16:23.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.561 "dma_device_type": 2 00:16:23.561 } 00:16:23.561 ], 00:16:23.561 "driver_specific": {} 00:16:23.561 } 00:16:23.561 ] 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.561 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.561 "name": "Existed_Raid", 00:16:23.561 "uuid": "56b6f541-fb11-4aea-b966-91758d1b94d9", 00:16:23.561 "strip_size_kb": 64, 00:16:23.561 "state": "online", 00:16:23.561 "raid_level": "raid5f", 00:16:23.561 "superblock": true, 00:16:23.561 "num_base_bdevs": 3, 00:16:23.561 "num_base_bdevs_discovered": 3, 00:16:23.561 "num_base_bdevs_operational": 3, 00:16:23.561 "base_bdevs_list": [ 00:16:23.561 { 00:16:23.561 "name": "BaseBdev1", 00:16:23.561 "uuid": "22db7cc0-f417-4f47-8a58-69a77a9cf0cf", 00:16:23.561 "is_configured": true, 00:16:23.562 "data_offset": 2048, 00:16:23.562 "data_size": 63488 00:16:23.562 }, 00:16:23.562 { 00:16:23.562 "name": "BaseBdev2", 00:16:23.562 "uuid": "5392e2fc-4ed7-406e-b775-316b522972d8", 00:16:23.562 "is_configured": true, 00:16:23.562 "data_offset": 2048, 00:16:23.562 "data_size": 63488 00:16:23.562 }, 00:16:23.562 { 00:16:23.562 "name": "BaseBdev3", 00:16:23.562 "uuid": "ed50f99e-be8d-4264-8036-e07e0ba6dd62", 00:16:23.562 "is_configured": true, 00:16:23.562 "data_offset": 2048, 00:16:23.562 "data_size": 63488 00:16:23.562 } 00:16:23.562 ] 00:16:23.562 }' 00:16:23.562 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.562 19:05:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.128 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:24.128 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:24.128 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:24.128 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:24.128 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.129 [2024-11-26 19:05:50.555648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:24.129 "name": "Existed_Raid", 00:16:24.129 "aliases": [ 00:16:24.129 "56b6f541-fb11-4aea-b966-91758d1b94d9" 00:16:24.129 ], 00:16:24.129 "product_name": "Raid Volume", 00:16:24.129 "block_size": 512, 00:16:24.129 "num_blocks": 126976, 00:16:24.129 "uuid": "56b6f541-fb11-4aea-b966-91758d1b94d9", 00:16:24.129 "assigned_rate_limits": { 00:16:24.129 "rw_ios_per_sec": 0, 00:16:24.129 
"rw_mbytes_per_sec": 0, 00:16:24.129 "r_mbytes_per_sec": 0, 00:16:24.129 "w_mbytes_per_sec": 0 00:16:24.129 }, 00:16:24.129 "claimed": false, 00:16:24.129 "zoned": false, 00:16:24.129 "supported_io_types": { 00:16:24.129 "read": true, 00:16:24.129 "write": true, 00:16:24.129 "unmap": false, 00:16:24.129 "flush": false, 00:16:24.129 "reset": true, 00:16:24.129 "nvme_admin": false, 00:16:24.129 "nvme_io": false, 00:16:24.129 "nvme_io_md": false, 00:16:24.129 "write_zeroes": true, 00:16:24.129 "zcopy": false, 00:16:24.129 "get_zone_info": false, 00:16:24.129 "zone_management": false, 00:16:24.129 "zone_append": false, 00:16:24.129 "compare": false, 00:16:24.129 "compare_and_write": false, 00:16:24.129 "abort": false, 00:16:24.129 "seek_hole": false, 00:16:24.129 "seek_data": false, 00:16:24.129 "copy": false, 00:16:24.129 "nvme_iov_md": false 00:16:24.129 }, 00:16:24.129 "driver_specific": { 00:16:24.129 "raid": { 00:16:24.129 "uuid": "56b6f541-fb11-4aea-b966-91758d1b94d9", 00:16:24.129 "strip_size_kb": 64, 00:16:24.129 "state": "online", 00:16:24.129 "raid_level": "raid5f", 00:16:24.129 "superblock": true, 00:16:24.129 "num_base_bdevs": 3, 00:16:24.129 "num_base_bdevs_discovered": 3, 00:16:24.129 "num_base_bdevs_operational": 3, 00:16:24.129 "base_bdevs_list": [ 00:16:24.129 { 00:16:24.129 "name": "BaseBdev1", 00:16:24.129 "uuid": "22db7cc0-f417-4f47-8a58-69a77a9cf0cf", 00:16:24.129 "is_configured": true, 00:16:24.129 "data_offset": 2048, 00:16:24.129 "data_size": 63488 00:16:24.129 }, 00:16:24.129 { 00:16:24.129 "name": "BaseBdev2", 00:16:24.129 "uuid": "5392e2fc-4ed7-406e-b775-316b522972d8", 00:16:24.129 "is_configured": true, 00:16:24.129 "data_offset": 2048, 00:16:24.129 "data_size": 63488 00:16:24.129 }, 00:16:24.129 { 00:16:24.129 "name": "BaseBdev3", 00:16:24.129 "uuid": "ed50f99e-be8d-4264-8036-e07e0ba6dd62", 00:16:24.129 "is_configured": true, 00:16:24.129 "data_offset": 2048, 00:16:24.129 "data_size": 63488 00:16:24.129 } 00:16:24.129 ] 00:16:24.129 } 
00:16:24.129 } 00:16:24.129 }' 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:24.129 BaseBdev2 00:16:24.129 BaseBdev3' 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.129 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.388 [2024-11-26 19:05:50.839532] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.388 "name": "Existed_Raid", 00:16:24.388 "uuid": "56b6f541-fb11-4aea-b966-91758d1b94d9", 00:16:24.388 "strip_size_kb": 64, 00:16:24.388 "state": "online", 00:16:24.388 "raid_level": "raid5f", 00:16:24.388 "superblock": true, 00:16:24.388 "num_base_bdevs": 3, 00:16:24.388 "num_base_bdevs_discovered": 2, 00:16:24.388 "num_base_bdevs_operational": 2, 00:16:24.388 "base_bdevs_list": [ 00:16:24.388 { 00:16:24.388 "name": null, 00:16:24.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.388 "is_configured": false, 00:16:24.388 "data_offset": 0, 00:16:24.388 "data_size": 63488 00:16:24.388 }, 00:16:24.388 { 00:16:24.388 "name": "BaseBdev2", 00:16:24.388 "uuid": "5392e2fc-4ed7-406e-b775-316b522972d8", 00:16:24.388 "is_configured": true, 00:16:24.388 "data_offset": 2048, 00:16:24.388 "data_size": 63488 00:16:24.388 }, 00:16:24.388 { 00:16:24.388 "name": "BaseBdev3", 00:16:24.388 "uuid": "ed50f99e-be8d-4264-8036-e07e0ba6dd62", 00:16:24.388 "is_configured": true, 00:16:24.388 "data_offset": 2048, 00:16:24.388 "data_size": 63488 00:16:24.388 } 00:16:24.388 ] 00:16:24.388 }' 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.388 19:05:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.956 19:05:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.956 [2024-11-26 19:05:51.466112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:24.956 [2024-11-26 19:05:51.466555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.956 [2024-11-26 19:05:51.560749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:24.956 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.215 [2024-11-26 19:05:51.620868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:25.215 [2024-11-26 19:05:51.621104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.215 BaseBdev2 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.215 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.474 [ 00:16:25.474 { 00:16:25.474 "name": "BaseBdev2", 00:16:25.474 "aliases": [ 00:16:25.474 "a066bece-767b-49cb-89fa-b9a8b8650da4" 00:16:25.474 ], 00:16:25.474 "product_name": "Malloc disk", 00:16:25.474 "block_size": 512, 00:16:25.474 "num_blocks": 65536, 00:16:25.474 "uuid": "a066bece-767b-49cb-89fa-b9a8b8650da4", 00:16:25.474 "assigned_rate_limits": { 00:16:25.474 "rw_ios_per_sec": 0, 00:16:25.474 "rw_mbytes_per_sec": 0, 00:16:25.474 "r_mbytes_per_sec": 0, 00:16:25.474 "w_mbytes_per_sec": 0 00:16:25.474 }, 00:16:25.474 "claimed": false, 00:16:25.474 "zoned": false, 00:16:25.474 "supported_io_types": { 00:16:25.474 "read": true, 00:16:25.474 "write": true, 00:16:25.474 "unmap": true, 00:16:25.474 "flush": true, 00:16:25.474 "reset": true, 00:16:25.474 "nvme_admin": false, 00:16:25.474 "nvme_io": false, 00:16:25.474 "nvme_io_md": false, 00:16:25.474 "write_zeroes": true, 00:16:25.474 "zcopy": true, 00:16:25.474 "get_zone_info": false, 00:16:25.474 "zone_management": false, 00:16:25.474 "zone_append": false, 
00:16:25.474 "compare": false, 00:16:25.474 "compare_and_write": false, 00:16:25.474 "abort": true, 00:16:25.474 "seek_hole": false, 00:16:25.474 "seek_data": false, 00:16:25.474 "copy": true, 00:16:25.474 "nvme_iov_md": false 00:16:25.474 }, 00:16:25.474 "memory_domains": [ 00:16:25.474 { 00:16:25.474 "dma_device_id": "system", 00:16:25.474 "dma_device_type": 1 00:16:25.474 }, 00:16:25.474 { 00:16:25.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.474 "dma_device_type": 2 00:16:25.475 } 00:16:25.475 ], 00:16:25.475 "driver_specific": {} 00:16:25.475 } 00:16:25.475 ] 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.475 BaseBdev3 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:25.475 
19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.475 [ 00:16:25.475 { 00:16:25.475 "name": "BaseBdev3", 00:16:25.475 "aliases": [ 00:16:25.475 "572f2045-b16f-434e-891c-363cc6236e36" 00:16:25.475 ], 00:16:25.475 "product_name": "Malloc disk", 00:16:25.475 "block_size": 512, 00:16:25.475 "num_blocks": 65536, 00:16:25.475 "uuid": "572f2045-b16f-434e-891c-363cc6236e36", 00:16:25.475 "assigned_rate_limits": { 00:16:25.475 "rw_ios_per_sec": 0, 00:16:25.475 "rw_mbytes_per_sec": 0, 00:16:25.475 "r_mbytes_per_sec": 0, 00:16:25.475 "w_mbytes_per_sec": 0 00:16:25.475 }, 00:16:25.475 "claimed": false, 00:16:25.475 "zoned": false, 00:16:25.475 "supported_io_types": { 00:16:25.475 "read": true, 00:16:25.475 "write": true, 00:16:25.475 "unmap": true, 00:16:25.475 "flush": true, 00:16:25.475 "reset": true, 00:16:25.475 "nvme_admin": false, 00:16:25.475 "nvme_io": false, 00:16:25.475 "nvme_io_md": false, 00:16:25.475 "write_zeroes": true, 00:16:25.475 "zcopy": true, 00:16:25.475 "get_zone_info": 
false, 00:16:25.475 "zone_management": false, 00:16:25.475 "zone_append": false, 00:16:25.475 "compare": false, 00:16:25.475 "compare_and_write": false, 00:16:25.475 "abort": true, 00:16:25.475 "seek_hole": false, 00:16:25.475 "seek_data": false, 00:16:25.475 "copy": true, 00:16:25.475 "nvme_iov_md": false 00:16:25.475 }, 00:16:25.475 "memory_domains": [ 00:16:25.475 { 00:16:25.475 "dma_device_id": "system", 00:16:25.475 "dma_device_type": 1 00:16:25.475 }, 00:16:25.475 { 00:16:25.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.475 "dma_device_type": 2 00:16:25.475 } 00:16:25.475 ], 00:16:25.475 "driver_specific": {} 00:16:25.475 } 00:16:25.475 ] 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.475 [2024-11-26 19:05:51.924167] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.475 [2024-11-26 19:05:51.924375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.475 [2024-11-26 19:05:51.924524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.475 [2024-11-26 19:05:51.927171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.475 19:05:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.475 "name": "Existed_Raid", 00:16:25.475 "uuid": "8300f498-29a1-496c-8c53-8353e53586ce", 00:16:25.475 "strip_size_kb": 64, 00:16:25.475 "state": "configuring", 00:16:25.475 "raid_level": "raid5f", 00:16:25.475 "superblock": true, 00:16:25.475 "num_base_bdevs": 3, 00:16:25.475 "num_base_bdevs_discovered": 2, 00:16:25.475 "num_base_bdevs_operational": 3, 00:16:25.475 "base_bdevs_list": [ 00:16:25.475 { 00:16:25.475 "name": "BaseBdev1", 00:16:25.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.475 "is_configured": false, 00:16:25.475 "data_offset": 0, 00:16:25.475 "data_size": 0 00:16:25.475 }, 00:16:25.475 { 00:16:25.475 "name": "BaseBdev2", 00:16:25.475 "uuid": "a066bece-767b-49cb-89fa-b9a8b8650da4", 00:16:25.475 "is_configured": true, 00:16:25.475 "data_offset": 2048, 00:16:25.475 "data_size": 63488 00:16:25.475 }, 00:16:25.475 { 00:16:25.475 "name": "BaseBdev3", 00:16:25.475 "uuid": "572f2045-b16f-434e-891c-363cc6236e36", 00:16:25.475 "is_configured": true, 00:16:25.475 "data_offset": 2048, 00:16:25.475 "data_size": 63488 00:16:25.475 } 00:16:25.475 ] 00:16:25.475 }' 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.475 19:05:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.042 [2024-11-26 19:05:52.444322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.042 
19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.042 19:05:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.043 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.043 19:05:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.043 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.043 "name": "Existed_Raid", 00:16:26.043 "uuid": 
"8300f498-29a1-496c-8c53-8353e53586ce", 00:16:26.043 "strip_size_kb": 64, 00:16:26.043 "state": "configuring", 00:16:26.043 "raid_level": "raid5f", 00:16:26.043 "superblock": true, 00:16:26.043 "num_base_bdevs": 3, 00:16:26.043 "num_base_bdevs_discovered": 1, 00:16:26.043 "num_base_bdevs_operational": 3, 00:16:26.043 "base_bdevs_list": [ 00:16:26.043 { 00:16:26.043 "name": "BaseBdev1", 00:16:26.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.043 "is_configured": false, 00:16:26.043 "data_offset": 0, 00:16:26.043 "data_size": 0 00:16:26.043 }, 00:16:26.043 { 00:16:26.043 "name": null, 00:16:26.043 "uuid": "a066bece-767b-49cb-89fa-b9a8b8650da4", 00:16:26.043 "is_configured": false, 00:16:26.043 "data_offset": 0, 00:16:26.043 "data_size": 63488 00:16:26.043 }, 00:16:26.043 { 00:16:26.043 "name": "BaseBdev3", 00:16:26.043 "uuid": "572f2045-b16f-434e-891c-363cc6236e36", 00:16:26.043 "is_configured": true, 00:16:26.043 "data_offset": 2048, 00:16:26.043 "data_size": 63488 00:16:26.043 } 00:16:26.043 ] 00:16:26.043 }' 00:16:26.043 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.043 19:05:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.611 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:26.611 19:05:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.611 19:05:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.611 19:05:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.611 19:05:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:26.611 19:05:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.611 [2024-11-26 19:05:53.049854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.611 BaseBdev1 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.611 [ 00:16:26.611 { 00:16:26.611 "name": "BaseBdev1", 00:16:26.611 "aliases": [ 00:16:26.611 "d6e748d2-dc1b-4513-9b20-0fb5c54d930d" 00:16:26.611 ], 00:16:26.611 "product_name": "Malloc disk", 00:16:26.611 "block_size": 512, 00:16:26.611 "num_blocks": 65536, 00:16:26.611 "uuid": "d6e748d2-dc1b-4513-9b20-0fb5c54d930d", 00:16:26.611 "assigned_rate_limits": { 00:16:26.611 "rw_ios_per_sec": 0, 00:16:26.611 "rw_mbytes_per_sec": 0, 00:16:26.611 "r_mbytes_per_sec": 0, 00:16:26.611 "w_mbytes_per_sec": 0 00:16:26.611 }, 00:16:26.611 "claimed": true, 00:16:26.611 "claim_type": "exclusive_write", 00:16:26.611 "zoned": false, 00:16:26.611 "supported_io_types": { 00:16:26.611 "read": true, 00:16:26.611 "write": true, 00:16:26.611 "unmap": true, 00:16:26.611 "flush": true, 00:16:26.611 "reset": true, 00:16:26.611 "nvme_admin": false, 00:16:26.611 "nvme_io": false, 00:16:26.611 "nvme_io_md": false, 00:16:26.611 "write_zeroes": true, 00:16:26.611 "zcopy": true, 00:16:26.611 "get_zone_info": false, 00:16:26.611 "zone_management": false, 00:16:26.611 "zone_append": false, 00:16:26.611 "compare": false, 00:16:26.611 "compare_and_write": false, 00:16:26.611 "abort": true, 00:16:26.611 "seek_hole": false, 00:16:26.611 "seek_data": false, 00:16:26.611 "copy": true, 00:16:26.611 "nvme_iov_md": false 00:16:26.611 }, 00:16:26.611 "memory_domains": [ 00:16:26.611 { 00:16:26.611 "dma_device_id": "system", 00:16:26.611 "dma_device_type": 1 00:16:26.611 }, 00:16:26.611 { 00:16:26.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.611 "dma_device_type": 2 00:16:26.611 } 00:16:26.611 ], 00:16:26.611 "driver_specific": {} 00:16:26.611 } 00:16:26.611 ] 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.611 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.611 "name": "Existed_Raid", 00:16:26.611 "uuid": 
"8300f498-29a1-496c-8c53-8353e53586ce", 00:16:26.611 "strip_size_kb": 64, 00:16:26.611 "state": "configuring", 00:16:26.611 "raid_level": "raid5f", 00:16:26.611 "superblock": true, 00:16:26.611 "num_base_bdevs": 3, 00:16:26.611 "num_base_bdevs_discovered": 2, 00:16:26.611 "num_base_bdevs_operational": 3, 00:16:26.611 "base_bdevs_list": [ 00:16:26.611 { 00:16:26.611 "name": "BaseBdev1", 00:16:26.611 "uuid": "d6e748d2-dc1b-4513-9b20-0fb5c54d930d", 00:16:26.611 "is_configured": true, 00:16:26.611 "data_offset": 2048, 00:16:26.611 "data_size": 63488 00:16:26.611 }, 00:16:26.611 { 00:16:26.611 "name": null, 00:16:26.611 "uuid": "a066bece-767b-49cb-89fa-b9a8b8650da4", 00:16:26.611 "is_configured": false, 00:16:26.611 "data_offset": 0, 00:16:26.612 "data_size": 63488 00:16:26.612 }, 00:16:26.612 { 00:16:26.612 "name": "BaseBdev3", 00:16:26.612 "uuid": "572f2045-b16f-434e-891c-363cc6236e36", 00:16:26.612 "is_configured": true, 00:16:26.612 "data_offset": 2048, 00:16:26.612 "data_size": 63488 00:16:26.612 } 00:16:26.612 ] 00:16:26.612 }' 00:16:26.612 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.612 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.204 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.204 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.204 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.204 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:27.204 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.204 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:27.204 19:05:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:27.204 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.204 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.204 [2024-11-26 19:05:53.642065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.205 "name": "Existed_Raid", 00:16:27.205 "uuid": "8300f498-29a1-496c-8c53-8353e53586ce", 00:16:27.205 "strip_size_kb": 64, 00:16:27.205 "state": "configuring", 00:16:27.205 "raid_level": "raid5f", 00:16:27.205 "superblock": true, 00:16:27.205 "num_base_bdevs": 3, 00:16:27.205 "num_base_bdevs_discovered": 1, 00:16:27.205 "num_base_bdevs_operational": 3, 00:16:27.205 "base_bdevs_list": [ 00:16:27.205 { 00:16:27.205 "name": "BaseBdev1", 00:16:27.205 "uuid": "d6e748d2-dc1b-4513-9b20-0fb5c54d930d", 00:16:27.205 "is_configured": true, 00:16:27.205 "data_offset": 2048, 00:16:27.205 "data_size": 63488 00:16:27.205 }, 00:16:27.205 { 00:16:27.205 "name": null, 00:16:27.205 "uuid": "a066bece-767b-49cb-89fa-b9a8b8650da4", 00:16:27.205 "is_configured": false, 00:16:27.205 "data_offset": 0, 00:16:27.205 "data_size": 63488 00:16:27.205 }, 00:16:27.205 { 00:16:27.205 "name": null, 00:16:27.205 "uuid": "572f2045-b16f-434e-891c-363cc6236e36", 00:16:27.205 "is_configured": false, 00:16:27.205 "data_offset": 0, 00:16:27.205 "data_size": 63488 00:16:27.205 } 00:16:27.205 ] 00:16:27.205 }' 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.205 19:05:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 
-- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.772 [2024-11-26 19:05:54.214288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.772 19:05:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.772 "name": "Existed_Raid", 00:16:27.772 "uuid": "8300f498-29a1-496c-8c53-8353e53586ce", 00:16:27.772 "strip_size_kb": 64, 00:16:27.772 "state": "configuring", 00:16:27.772 "raid_level": "raid5f", 00:16:27.772 "superblock": true, 00:16:27.772 "num_base_bdevs": 3, 00:16:27.772 "num_base_bdevs_discovered": 2, 00:16:27.772 "num_base_bdevs_operational": 3, 00:16:27.772 "base_bdevs_list": [ 00:16:27.772 { 00:16:27.772 "name": "BaseBdev1", 00:16:27.772 "uuid": "d6e748d2-dc1b-4513-9b20-0fb5c54d930d", 00:16:27.772 "is_configured": true, 00:16:27.772 "data_offset": 2048, 00:16:27.772 "data_size": 63488 00:16:27.772 }, 00:16:27.772 { 00:16:27.772 "name": null, 00:16:27.772 "uuid": "a066bece-767b-49cb-89fa-b9a8b8650da4", 00:16:27.772 "is_configured": false, 00:16:27.772 "data_offset": 0, 00:16:27.772 "data_size": 63488 00:16:27.772 }, 00:16:27.772 { 00:16:27.772 "name": "BaseBdev3", 00:16:27.772 "uuid": "572f2045-b16f-434e-891c-363cc6236e36", 00:16:27.772 
"is_configured": true, 00:16:27.772 "data_offset": 2048, 00:16:27.772 "data_size": 63488 00:16:27.772 } 00:16:27.772 ] 00:16:27.772 }' 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.772 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.340 [2024-11-26 19:05:54.822494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.340 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.598 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.598 "name": "Existed_Raid", 00:16:28.598 "uuid": "8300f498-29a1-496c-8c53-8353e53586ce", 00:16:28.598 "strip_size_kb": 64, 00:16:28.598 "state": "configuring", 00:16:28.598 "raid_level": "raid5f", 00:16:28.598 "superblock": true, 00:16:28.598 "num_base_bdevs": 3, 00:16:28.598 "num_base_bdevs_discovered": 1, 00:16:28.598 "num_base_bdevs_operational": 3, 00:16:28.598 "base_bdevs_list": [ 00:16:28.598 { 00:16:28.598 "name": null, 00:16:28.598 
"uuid": "d6e748d2-dc1b-4513-9b20-0fb5c54d930d", 00:16:28.598 "is_configured": false, 00:16:28.598 "data_offset": 0, 00:16:28.598 "data_size": 63488 00:16:28.598 }, 00:16:28.598 { 00:16:28.598 "name": null, 00:16:28.598 "uuid": "a066bece-767b-49cb-89fa-b9a8b8650da4", 00:16:28.598 "is_configured": false, 00:16:28.598 "data_offset": 0, 00:16:28.598 "data_size": 63488 00:16:28.598 }, 00:16:28.598 { 00:16:28.598 "name": "BaseBdev3", 00:16:28.598 "uuid": "572f2045-b16f-434e-891c-363cc6236e36", 00:16:28.598 "is_configured": true, 00:16:28.598 "data_offset": 2048, 00:16:28.598 "data_size": 63488 00:16:28.598 } 00:16:28.598 ] 00:16:28.598 }' 00:16:28.598 19:05:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.598 19:05:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.856 [2024-11-26 19:05:55.433809] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.856 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:29.114 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.114 "name": "Existed_Raid", 00:16:29.114 "uuid": "8300f498-29a1-496c-8c53-8353e53586ce", 00:16:29.114 "strip_size_kb": 64, 00:16:29.114 "state": "configuring", 00:16:29.114 "raid_level": "raid5f", 00:16:29.114 "superblock": true, 00:16:29.114 "num_base_bdevs": 3, 00:16:29.114 "num_base_bdevs_discovered": 2, 00:16:29.114 "num_base_bdevs_operational": 3, 00:16:29.114 "base_bdevs_list": [ 00:16:29.114 { 00:16:29.114 "name": null, 00:16:29.114 "uuid": "d6e748d2-dc1b-4513-9b20-0fb5c54d930d", 00:16:29.114 "is_configured": false, 00:16:29.114 "data_offset": 0, 00:16:29.114 "data_size": 63488 00:16:29.114 }, 00:16:29.114 { 00:16:29.114 "name": "BaseBdev2", 00:16:29.114 "uuid": "a066bece-767b-49cb-89fa-b9a8b8650da4", 00:16:29.114 "is_configured": true, 00:16:29.114 "data_offset": 2048, 00:16:29.114 "data_size": 63488 00:16:29.114 }, 00:16:29.114 { 00:16:29.114 "name": "BaseBdev3", 00:16:29.114 "uuid": "572f2045-b16f-434e-891c-363cc6236e36", 00:16:29.114 "is_configured": true, 00:16:29.114 "data_offset": 2048, 00:16:29.114 "data_size": 63488 00:16:29.114 } 00:16:29.114 ] 00:16:29.114 }' 00:16:29.114 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.114 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.373 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.373 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.373 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.373 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:29.373 19:05:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.373 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:29.373 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.373 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.373 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.373 19:05:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:29.373 19:05:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d6e748d2-dc1b-4513-9b20-0fb5c54d930d 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.632 NewBaseBdev 00:16:29.632 [2024-11-26 19:05:56.065373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:29.632 [2024-11-26 19:05:56.065693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:29.632 [2024-11-26 19:05:56.065719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:29.632 [2024-11-26 19:05:56.066042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.632 [2024-11-26 19:05:56.071104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:29.632 [2024-11-26 19:05:56.071267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:29.632 [2024-11-26 19:05:56.071765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.632 [ 00:16:29.632 { 00:16:29.632 "name": "NewBaseBdev", 00:16:29.632 "aliases": [ 00:16:29.632 "d6e748d2-dc1b-4513-9b20-0fb5c54d930d" 00:16:29.632 ], 00:16:29.632 "product_name": "Malloc disk", 00:16:29.632 "block_size": 512, 
00:16:29.632 "num_blocks": 65536, 00:16:29.632 "uuid": "d6e748d2-dc1b-4513-9b20-0fb5c54d930d", 00:16:29.632 "assigned_rate_limits": { 00:16:29.632 "rw_ios_per_sec": 0, 00:16:29.632 "rw_mbytes_per_sec": 0, 00:16:29.632 "r_mbytes_per_sec": 0, 00:16:29.632 "w_mbytes_per_sec": 0 00:16:29.632 }, 00:16:29.632 "claimed": true, 00:16:29.632 "claim_type": "exclusive_write", 00:16:29.632 "zoned": false, 00:16:29.632 "supported_io_types": { 00:16:29.632 "read": true, 00:16:29.632 "write": true, 00:16:29.632 "unmap": true, 00:16:29.632 "flush": true, 00:16:29.632 "reset": true, 00:16:29.632 "nvme_admin": false, 00:16:29.632 "nvme_io": false, 00:16:29.632 "nvme_io_md": false, 00:16:29.632 "write_zeroes": true, 00:16:29.632 "zcopy": true, 00:16:29.632 "get_zone_info": false, 00:16:29.632 "zone_management": false, 00:16:29.632 "zone_append": false, 00:16:29.632 "compare": false, 00:16:29.632 "compare_and_write": false, 00:16:29.632 "abort": true, 00:16:29.632 "seek_hole": false, 00:16:29.632 "seek_data": false, 00:16:29.632 "copy": true, 00:16:29.632 "nvme_iov_md": false 00:16:29.632 }, 00:16:29.632 "memory_domains": [ 00:16:29.632 { 00:16:29.632 "dma_device_id": "system", 00:16:29.632 "dma_device_type": 1 00:16:29.632 }, 00:16:29.632 { 00:16:29.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.632 "dma_device_type": 2 00:16:29.632 } 00:16:29.632 ], 00:16:29.632 "driver_specific": {} 00:16:29.632 } 00:16:29.632 ] 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.632 "name": "Existed_Raid", 00:16:29.632 "uuid": "8300f498-29a1-496c-8c53-8353e53586ce", 00:16:29.632 "strip_size_kb": 64, 00:16:29.632 "state": "online", 00:16:29.632 "raid_level": "raid5f", 00:16:29.632 "superblock": true, 00:16:29.632 "num_base_bdevs": 3, 00:16:29.632 "num_base_bdevs_discovered": 3, 00:16:29.632 "num_base_bdevs_operational": 3, 00:16:29.632 "base_bdevs_list": [ 00:16:29.632 { 00:16:29.632 "name": 
"NewBaseBdev", 00:16:29.632 "uuid": "d6e748d2-dc1b-4513-9b20-0fb5c54d930d", 00:16:29.632 "is_configured": true, 00:16:29.632 "data_offset": 2048, 00:16:29.632 "data_size": 63488 00:16:29.632 }, 00:16:29.632 { 00:16:29.632 "name": "BaseBdev2", 00:16:29.632 "uuid": "a066bece-767b-49cb-89fa-b9a8b8650da4", 00:16:29.632 "is_configured": true, 00:16:29.632 "data_offset": 2048, 00:16:29.632 "data_size": 63488 00:16:29.632 }, 00:16:29.632 { 00:16:29.632 "name": "BaseBdev3", 00:16:29.632 "uuid": "572f2045-b16f-434e-891c-363cc6236e36", 00:16:29.632 "is_configured": true, 00:16:29.632 "data_offset": 2048, 00:16:29.632 "data_size": 63488 00:16:29.632 } 00:16:29.632 ] 00:16:29.632 }' 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.632 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.200 19:05:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.200 [2024-11-26 19:05:56.646648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:30.200 "name": "Existed_Raid", 00:16:30.200 "aliases": [ 00:16:30.200 "8300f498-29a1-496c-8c53-8353e53586ce" 00:16:30.200 ], 00:16:30.200 "product_name": "Raid Volume", 00:16:30.200 "block_size": 512, 00:16:30.200 "num_blocks": 126976, 00:16:30.200 "uuid": "8300f498-29a1-496c-8c53-8353e53586ce", 00:16:30.200 "assigned_rate_limits": { 00:16:30.200 "rw_ios_per_sec": 0, 00:16:30.200 "rw_mbytes_per_sec": 0, 00:16:30.200 "r_mbytes_per_sec": 0, 00:16:30.200 "w_mbytes_per_sec": 0 00:16:30.200 }, 00:16:30.200 "claimed": false, 00:16:30.200 "zoned": false, 00:16:30.200 "supported_io_types": { 00:16:30.200 "read": true, 00:16:30.200 "write": true, 00:16:30.200 "unmap": false, 00:16:30.200 "flush": false, 00:16:30.200 "reset": true, 00:16:30.200 "nvme_admin": false, 00:16:30.200 "nvme_io": false, 00:16:30.200 "nvme_io_md": false, 00:16:30.200 "write_zeroes": true, 00:16:30.200 "zcopy": false, 00:16:30.200 "get_zone_info": false, 00:16:30.200 "zone_management": false, 00:16:30.200 "zone_append": false, 00:16:30.200 "compare": false, 00:16:30.200 "compare_and_write": false, 00:16:30.200 "abort": false, 00:16:30.200 "seek_hole": false, 00:16:30.200 "seek_data": false, 00:16:30.200 "copy": false, 00:16:30.200 "nvme_iov_md": false 00:16:30.200 }, 00:16:30.200 "driver_specific": { 00:16:30.200 "raid": { 00:16:30.200 "uuid": "8300f498-29a1-496c-8c53-8353e53586ce", 00:16:30.200 "strip_size_kb": 64, 00:16:30.200 "state": "online", 00:16:30.200 "raid_level": "raid5f", 00:16:30.200 "superblock": true, 00:16:30.200 "num_base_bdevs": 3, 00:16:30.200 
"num_base_bdevs_discovered": 3, 00:16:30.200 "num_base_bdevs_operational": 3, 00:16:30.200 "base_bdevs_list": [ 00:16:30.200 { 00:16:30.200 "name": "NewBaseBdev", 00:16:30.200 "uuid": "d6e748d2-dc1b-4513-9b20-0fb5c54d930d", 00:16:30.200 "is_configured": true, 00:16:30.200 "data_offset": 2048, 00:16:30.200 "data_size": 63488 00:16:30.200 }, 00:16:30.200 { 00:16:30.200 "name": "BaseBdev2", 00:16:30.200 "uuid": "a066bece-767b-49cb-89fa-b9a8b8650da4", 00:16:30.200 "is_configured": true, 00:16:30.200 "data_offset": 2048, 00:16:30.200 "data_size": 63488 00:16:30.200 }, 00:16:30.200 { 00:16:30.200 "name": "BaseBdev3", 00:16:30.200 "uuid": "572f2045-b16f-434e-891c-363cc6236e36", 00:16:30.200 "is_configured": true, 00:16:30.200 "data_offset": 2048, 00:16:30.200 "data_size": 63488 00:16:30.200 } 00:16:30.200 ] 00:16:30.200 } 00:16:30.200 } 00:16:30.200 }' 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:30.200 BaseBdev2 00:16:30.200 BaseBdev3' 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.200 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.459 
19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.459 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.460 [2024-11-26 19:05:56.958472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.460 [2024-11-26 19:05:56.958661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.460 [2024-11-26 19:05:56.958919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.460 [2024-11-26 19:05:56.959436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.460 [2024-11-26 19:05:56.959578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81373 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81373 ']' 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81373 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # uname 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81373 00:16:30.460 killing process with pid 81373 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81373' 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81373 00:16:30.460 [2024-11-26 19:05:56.999534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:30.460 19:05:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 81373 00:16:30.718 [2024-11-26 19:05:57.298049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.108 19:05:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:32.108 00:16:32.108 real 0m11.894s 00:16:32.108 user 0m19.358s 00:16:32.108 sys 0m1.815s 00:16:32.108 19:05:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.108 ************************************ 00:16:32.108 END TEST raid5f_state_function_test_sb 00:16:32.108 ************************************ 00:16:32.108 19:05:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.108 19:05:58 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:16:32.108 19:05:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:32.108 19:05:58 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.108 19:05:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:32.108 ************************************ 00:16:32.108 START TEST raid5f_superblock_test 00:16:32.108 ************************************ 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=82000 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 82000 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 82000 ']' 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.108 19:05:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.108 [2024-11-26 19:05:58.639014] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:16:32.108 [2024-11-26 19:05:58.639220] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82000 ] 00:16:32.367 [2024-11-26 19:05:58.822085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.367 [2024-11-26 19:05:58.970110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.626 [2024-11-26 19:05:59.195919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.626 [2024-11-26 19:05:59.195992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.190 malloc1 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.190 [2024-11-26 19:05:59.682577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:33.190 [2024-11-26 19:05:59.682870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.190 [2024-11-26 19:05:59.682954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:33.190 [2024-11-26 19:05:59.683226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.190 [2024-11-26 19:05:59.686443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.190 [2024-11-26 19:05:59.686635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:33.190 pt1 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.190 malloc2 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.190 [2024-11-26 19:05:59.743142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.190 [2024-11-26 19:05:59.743414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.190 [2024-11-26 19:05:59.743474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:33.190 [2024-11-26 19:05:59.743491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.190 [2024-11-26 19:05:59.746653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.190 [2024-11-26 19:05:59.746702] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.190 pt2 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:33.190 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.191 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.191 malloc3 00:16:33.191 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.191 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:33.191 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.449 [2024-11-26 19:05:59.816689] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:33.449 [2024-11-26 19:05:59.817467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.449 [2024-11-26 19:05:59.817557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:33.449 [2024-11-26 19:05:59.817676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.449 [2024-11-26 19:05:59.820911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.449 [2024-11-26 19:05:59.821078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:33.449 pt3 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.449 [2024-11-26 19:05:59.829500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:33.449 [2024-11-26 19:05:59.832116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.449 [2024-11-26 19:05:59.832393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:33.449 [2024-11-26 19:05:59.832659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:33.449 [2024-11-26 19:05:59.832689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:33.449 [2024-11-26 19:05:59.833082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:33.449 [2024-11-26 19:05:59.838371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:33.449 [2024-11-26 19:05:59.838525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:33.449 [2024-11-26 19:05:59.838877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.449 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.449 
19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.450 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.450 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.450 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.450 "name": "raid_bdev1", 00:16:33.450 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:33.450 "strip_size_kb": 64, 00:16:33.450 "state": "online", 00:16:33.450 "raid_level": "raid5f", 00:16:33.450 "superblock": true, 00:16:33.450 "num_base_bdevs": 3, 00:16:33.450 "num_base_bdevs_discovered": 3, 00:16:33.450 "num_base_bdevs_operational": 3, 00:16:33.450 "base_bdevs_list": [ 00:16:33.450 { 00:16:33.450 "name": "pt1", 00:16:33.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:33.450 "is_configured": true, 00:16:33.450 "data_offset": 2048, 00:16:33.450 "data_size": 63488 00:16:33.450 }, 00:16:33.450 { 00:16:33.450 "name": "pt2", 00:16:33.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.450 "is_configured": true, 00:16:33.450 "data_offset": 2048, 00:16:33.450 "data_size": 63488 00:16:33.450 }, 00:16:33.450 { 00:16:33.450 "name": "pt3", 00:16:33.450 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.450 "is_configured": true, 00:16:33.450 "data_offset": 2048, 00:16:33.450 "data_size": 63488 00:16:33.450 } 00:16:33.450 ] 00:16:33.450 }' 00:16:33.450 19:05:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.450 19:05:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:34.014 19:06:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:34.014 [2024-11-26 19:06:00.385681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:34.014 "name": "raid_bdev1", 00:16:34.014 "aliases": [ 00:16:34.014 "00add759-1c36-43d8-9af0-66a358f6b8ce" 00:16:34.014 ], 00:16:34.014 "product_name": "Raid Volume", 00:16:34.014 "block_size": 512, 00:16:34.014 "num_blocks": 126976, 00:16:34.014 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:34.014 "assigned_rate_limits": { 00:16:34.014 "rw_ios_per_sec": 0, 00:16:34.014 "rw_mbytes_per_sec": 0, 00:16:34.014 "r_mbytes_per_sec": 0, 00:16:34.014 "w_mbytes_per_sec": 0 00:16:34.014 }, 00:16:34.014 "claimed": false, 00:16:34.014 "zoned": false, 00:16:34.014 "supported_io_types": { 00:16:34.014 "read": true, 00:16:34.014 "write": true, 00:16:34.014 "unmap": false, 00:16:34.014 "flush": false, 00:16:34.014 "reset": true, 00:16:34.014 "nvme_admin": false, 00:16:34.014 "nvme_io": false, 00:16:34.014 "nvme_io_md": false, 
00:16:34.014 "write_zeroes": true, 00:16:34.014 "zcopy": false, 00:16:34.014 "get_zone_info": false, 00:16:34.014 "zone_management": false, 00:16:34.014 "zone_append": false, 00:16:34.014 "compare": false, 00:16:34.014 "compare_and_write": false, 00:16:34.014 "abort": false, 00:16:34.014 "seek_hole": false, 00:16:34.014 "seek_data": false, 00:16:34.014 "copy": false, 00:16:34.014 "nvme_iov_md": false 00:16:34.014 }, 00:16:34.014 "driver_specific": { 00:16:34.014 "raid": { 00:16:34.014 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:34.014 "strip_size_kb": 64, 00:16:34.014 "state": "online", 00:16:34.014 "raid_level": "raid5f", 00:16:34.014 "superblock": true, 00:16:34.014 "num_base_bdevs": 3, 00:16:34.014 "num_base_bdevs_discovered": 3, 00:16:34.014 "num_base_bdevs_operational": 3, 00:16:34.014 "base_bdevs_list": [ 00:16:34.014 { 00:16:34.014 "name": "pt1", 00:16:34.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:34.014 "is_configured": true, 00:16:34.014 "data_offset": 2048, 00:16:34.014 "data_size": 63488 00:16:34.014 }, 00:16:34.014 { 00:16:34.014 "name": "pt2", 00:16:34.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.014 "is_configured": true, 00:16:34.014 "data_offset": 2048, 00:16:34.014 "data_size": 63488 00:16:34.014 }, 00:16:34.014 { 00:16:34.014 "name": "pt3", 00:16:34.014 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.014 "is_configured": true, 00:16:34.014 "data_offset": 2048, 00:16:34.014 "data_size": 63488 00:16:34.014 } 00:16:34.014 ] 00:16:34.014 } 00:16:34.014 } 00:16:34.014 }' 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:34.014 pt2 00:16:34.014 pt3' 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.014 
19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.014 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.272 [2024-11-26 19:06:00.693737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=00add759-1c36-43d8-9af0-66a358f6b8ce 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 00add759-1c36-43d8-9af0-66a358f6b8ce ']' 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.272 19:06:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.272 [2024-11-26 19:06:00.749512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.272 [2024-11-26 19:06:00.749575] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.272 [2024-11-26 19:06:00.749696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.272 [2024-11-26 19:06:00.749810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.272 [2024-11-26 19:06:00.749827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.272 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.272 [2024-11-26 19:06:00.881624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:34.272 [2024-11-26 19:06:00.884350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:34.272 [2024-11-26 19:06:00.884456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:34.272 [2024-11-26 19:06:00.884549] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:34.272 [2024-11-26 19:06:00.884630] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:34.272 [2024-11-26 19:06:00.884664] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:34.272 [2024-11-26 19:06:00.884692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.272 [2024-11-26 19:06:00.884706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:34.272 request: 00:16:34.272 { 00:16:34.272 "name": "raid_bdev1", 00:16:34.272 "raid_level": "raid5f", 00:16:34.272 "base_bdevs": [ 00:16:34.272 "malloc1", 00:16:34.272 "malloc2", 00:16:34.272 "malloc3" 00:16:34.272 ], 00:16:34.272 "strip_size_kb": 64, 00:16:34.272 "superblock": false, 00:16:34.272 "method": "bdev_raid_create", 00:16:34.272 "req_id": 1 00:16:34.272 } 00:16:34.272 Got JSON-RPC error response 00:16:34.272 response: 00:16:34.272 { 00:16:34.273 "code": -17, 00:16:34.273 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:34.273 } 00:16:34.273 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:34.273 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:34.273 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:34.273 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:34.273 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:34.530 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.530 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.530 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:34.530 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:34.530 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.530 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:34.530 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:34.530 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:34.530 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.530 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.530 [2024-11-26 19:06:00.941617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:34.530 [2024-11-26 19:06:00.941717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.530 [2024-11-26 19:06:00.941760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:34.530 [2024-11-26 19:06:00.941775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.530 [2024-11-26 19:06:00.945029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.530 [2024-11-26 19:06:00.945094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:34.530 [2024-11-26 19:06:00.945239] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:34.530 [2024-11-26 19:06:00.945336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:34.530 pt1 00:16:34.530 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.531 19:06:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.531 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.531 "name": "raid_bdev1", 00:16:34.531 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:34.531 "strip_size_kb": 64, 00:16:34.531 "state": "configuring", 00:16:34.531 "raid_level": "raid5f", 00:16:34.531 "superblock": true, 00:16:34.531 "num_base_bdevs": 3, 00:16:34.531 "num_base_bdevs_discovered": 1, 00:16:34.531 
"num_base_bdevs_operational": 3, 00:16:34.531 "base_bdevs_list": [ 00:16:34.531 { 00:16:34.531 "name": "pt1", 00:16:34.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:34.531 "is_configured": true, 00:16:34.531 "data_offset": 2048, 00:16:34.531 "data_size": 63488 00:16:34.531 }, 00:16:34.531 { 00:16:34.531 "name": null, 00:16:34.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.531 "is_configured": false, 00:16:34.531 "data_offset": 2048, 00:16:34.531 "data_size": 63488 00:16:34.531 }, 00:16:34.531 { 00:16:34.531 "name": null, 00:16:34.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.531 "is_configured": false, 00:16:34.531 "data_offset": 2048, 00:16:34.531 "data_size": 63488 00:16:34.531 } 00:16:34.531 ] 00:16:34.531 }' 00:16:34.531 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.531 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.097 [2024-11-26 19:06:01.497800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.097 [2024-11-26 19:06:01.497905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.097 [2024-11-26 19:06:01.497944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:35.097 [2024-11-26 19:06:01.497960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.097 [2024-11-26 19:06:01.498607] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.097 [2024-11-26 19:06:01.498653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.097 [2024-11-26 19:06:01.498783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:35.097 [2024-11-26 19:06:01.498826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.097 pt2 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.097 [2024-11-26 19:06:01.505830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.097 "name": "raid_bdev1", 00:16:35.097 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:35.097 "strip_size_kb": 64, 00:16:35.097 "state": "configuring", 00:16:35.097 "raid_level": "raid5f", 00:16:35.097 "superblock": true, 00:16:35.097 "num_base_bdevs": 3, 00:16:35.097 "num_base_bdevs_discovered": 1, 00:16:35.097 "num_base_bdevs_operational": 3, 00:16:35.097 "base_bdevs_list": [ 00:16:35.097 { 00:16:35.097 "name": "pt1", 00:16:35.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.097 "is_configured": true, 00:16:35.097 "data_offset": 2048, 00:16:35.097 "data_size": 63488 00:16:35.097 }, 00:16:35.097 { 00:16:35.097 "name": null, 00:16:35.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.097 "is_configured": false, 00:16:35.097 "data_offset": 0, 00:16:35.097 "data_size": 63488 00:16:35.097 }, 00:16:35.097 { 00:16:35.097 "name": null, 00:16:35.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:35.097 "is_configured": false, 00:16:35.097 "data_offset": 2048, 00:16:35.097 "data_size": 63488 00:16:35.097 } 00:16:35.097 ] 00:16:35.097 }' 00:16:35.097 19:06:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.097 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.665 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:35.665 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:35.665 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.665 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.665 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.665 [2024-11-26 19:06:01.989885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.665 [2024-11-26 19:06:01.989995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.665 [2024-11-26 19:06:01.990027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:35.665 [2024-11-26 19:06:01.990045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.665 [2024-11-26 19:06:01.990766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.665 [2024-11-26 19:06:01.990799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.665 [2024-11-26 19:06:01.990916] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:35.665 [2024-11-26 19:06:01.990957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.665 pt2 00:16:35.665 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.665 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:35.665 19:06:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:35.665 19:06:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:35.665 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.665 19:06:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.665 [2024-11-26 19:06:02.001930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:35.665 [2024-11-26 19:06:02.002032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.665 [2024-11-26 19:06:02.002062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:35.665 [2024-11-26 19:06:02.002080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.665 [2024-11-26 19:06:02.002780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.665 [2024-11-26 19:06:02.002835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:35.665 [2024-11-26 19:06:02.002958] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:35.665 [2024-11-26 19:06:02.003000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:35.665 [2024-11-26 19:06:02.003190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:35.665 [2024-11-26 19:06:02.003222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:35.665 [2024-11-26 19:06:02.003617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:35.665 [2024-11-26 19:06:02.008988] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:35.665 [2024-11-26 19:06:02.009040] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:35.665 [2024-11-26 19:06:02.009384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.665 pt3 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.665 "name": "raid_bdev1", 00:16:35.665 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:35.665 "strip_size_kb": 64, 00:16:35.665 "state": "online", 00:16:35.665 "raid_level": "raid5f", 00:16:35.665 "superblock": true, 00:16:35.665 "num_base_bdevs": 3, 00:16:35.665 "num_base_bdevs_discovered": 3, 00:16:35.665 "num_base_bdevs_operational": 3, 00:16:35.665 "base_bdevs_list": [ 00:16:35.665 { 00:16:35.665 "name": "pt1", 00:16:35.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.665 "is_configured": true, 00:16:35.665 "data_offset": 2048, 00:16:35.665 "data_size": 63488 00:16:35.665 }, 00:16:35.665 { 00:16:35.665 "name": "pt2", 00:16:35.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.665 "is_configured": true, 00:16:35.665 "data_offset": 2048, 00:16:35.665 "data_size": 63488 00:16:35.665 }, 00:16:35.665 { 00:16:35.665 "name": "pt3", 00:16:35.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:35.665 "is_configured": true, 00:16:35.665 "data_offset": 2048, 00:16:35.665 "data_size": 63488 00:16:35.665 } 00:16:35.665 ] 00:16:35.665 }' 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.665 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.923 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:35.923 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:35.923 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:35.923 
19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:35.923 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:35.923 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:35.923 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.923 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:35.923 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.923 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.923 [2024-11-26 19:06:02.540255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.182 "name": "raid_bdev1", 00:16:36.182 "aliases": [ 00:16:36.182 "00add759-1c36-43d8-9af0-66a358f6b8ce" 00:16:36.182 ], 00:16:36.182 "product_name": "Raid Volume", 00:16:36.182 "block_size": 512, 00:16:36.182 "num_blocks": 126976, 00:16:36.182 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:36.182 "assigned_rate_limits": { 00:16:36.182 "rw_ios_per_sec": 0, 00:16:36.182 "rw_mbytes_per_sec": 0, 00:16:36.182 "r_mbytes_per_sec": 0, 00:16:36.182 "w_mbytes_per_sec": 0 00:16:36.182 }, 00:16:36.182 "claimed": false, 00:16:36.182 "zoned": false, 00:16:36.182 "supported_io_types": { 00:16:36.182 "read": true, 00:16:36.182 "write": true, 00:16:36.182 "unmap": false, 00:16:36.182 "flush": false, 00:16:36.182 "reset": true, 00:16:36.182 "nvme_admin": false, 00:16:36.182 "nvme_io": false, 00:16:36.182 "nvme_io_md": false, 00:16:36.182 "write_zeroes": true, 00:16:36.182 "zcopy": false, 00:16:36.182 "get_zone_info": false, 
00:16:36.182 "zone_management": false, 00:16:36.182 "zone_append": false, 00:16:36.182 "compare": false, 00:16:36.182 "compare_and_write": false, 00:16:36.182 "abort": false, 00:16:36.182 "seek_hole": false, 00:16:36.182 "seek_data": false, 00:16:36.182 "copy": false, 00:16:36.182 "nvme_iov_md": false 00:16:36.182 }, 00:16:36.182 "driver_specific": { 00:16:36.182 "raid": { 00:16:36.182 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:36.182 "strip_size_kb": 64, 00:16:36.182 "state": "online", 00:16:36.182 "raid_level": "raid5f", 00:16:36.182 "superblock": true, 00:16:36.182 "num_base_bdevs": 3, 00:16:36.182 "num_base_bdevs_discovered": 3, 00:16:36.182 "num_base_bdevs_operational": 3, 00:16:36.182 "base_bdevs_list": [ 00:16:36.182 { 00:16:36.182 "name": "pt1", 00:16:36.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.182 "is_configured": true, 00:16:36.182 "data_offset": 2048, 00:16:36.182 "data_size": 63488 00:16:36.182 }, 00:16:36.182 { 00:16:36.182 "name": "pt2", 00:16:36.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.182 "is_configured": true, 00:16:36.182 "data_offset": 2048, 00:16:36.182 "data_size": 63488 00:16:36.182 }, 00:16:36.182 { 00:16:36.182 "name": "pt3", 00:16:36.182 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.182 "is_configured": true, 00:16:36.182 "data_offset": 2048, 00:16:36.182 "data_size": 63488 00:16:36.182 } 00:16:36.182 ] 00:16:36.182 } 00:16:36.182 } 00:16:36.182 }' 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:36.182 pt2 00:16:36.182 pt3' 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.182 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:36.442 [2024-11-26 19:06:02.840339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 00add759-1c36-43d8-9af0-66a358f6b8ce '!=' 00add759-1c36-43d8-9af0-66a358f6b8ce ']' 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:36.442 19:06:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.442 [2024-11-26 19:06:02.900136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.442 19:06:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.442 "name": "raid_bdev1", 00:16:36.442 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:36.442 "strip_size_kb": 64, 00:16:36.442 "state": "online", 00:16:36.442 "raid_level": "raid5f", 00:16:36.442 "superblock": true, 00:16:36.442 "num_base_bdevs": 3, 00:16:36.442 "num_base_bdevs_discovered": 2, 00:16:36.442 "num_base_bdevs_operational": 2, 00:16:36.442 "base_bdevs_list": [ 00:16:36.442 { 00:16:36.442 "name": null, 00:16:36.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.442 "is_configured": false, 00:16:36.442 "data_offset": 0, 00:16:36.442 "data_size": 63488 00:16:36.442 }, 00:16:36.442 { 00:16:36.442 "name": "pt2", 00:16:36.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.442 "is_configured": true, 00:16:36.442 "data_offset": 2048, 00:16:36.442 "data_size": 63488 00:16:36.442 }, 00:16:36.442 { 00:16:36.442 "name": "pt3", 00:16:36.442 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.442 "is_configured": true, 00:16:36.442 "data_offset": 2048, 00:16:36.442 "data_size": 63488 00:16:36.442 } 00:16:36.442 ] 00:16:36.442 }' 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.442 19:06:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.011 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:37.011 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.011 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.011 [2024-11-26 19:06:03.420197] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.011 [2024-11-26 19:06:03.420240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.011 [2024-11-26 19:06:03.420365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.011 [2024-11-26 19:06:03.420454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.012 [2024-11-26 19:06:03.420479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.012 [2024-11-26 19:06:03.508237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:37.012 [2024-11-26 19:06:03.508355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.012 [2024-11-26 19:06:03.508386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:37.012 [2024-11-26 19:06:03.508404] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:16:37.012 [2024-11-26 19:06:03.511538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.012 [2024-11-26 19:06:03.511597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:37.012 [2024-11-26 19:06:03.511723] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:37.012 [2024-11-26 19:06:03.511794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.012 pt2 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.012 "name": "raid_bdev1", 00:16:37.012 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:37.012 "strip_size_kb": 64, 00:16:37.012 "state": "configuring", 00:16:37.012 "raid_level": "raid5f", 00:16:37.012 "superblock": true, 00:16:37.012 "num_base_bdevs": 3, 00:16:37.012 "num_base_bdevs_discovered": 1, 00:16:37.012 "num_base_bdevs_operational": 2, 00:16:37.012 "base_bdevs_list": [ 00:16:37.012 { 00:16:37.012 "name": null, 00:16:37.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.012 "is_configured": false, 00:16:37.012 "data_offset": 2048, 00:16:37.012 "data_size": 63488 00:16:37.012 }, 00:16:37.012 { 00:16:37.012 "name": "pt2", 00:16:37.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.012 "is_configured": true, 00:16:37.012 "data_offset": 2048, 00:16:37.012 "data_size": 63488 00:16:37.012 }, 00:16:37.012 { 00:16:37.012 "name": null, 00:16:37.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.012 "is_configured": false, 00:16:37.012 "data_offset": 2048, 00:16:37.012 "data_size": 63488 00:16:37.012 } 00:16:37.012 ] 00:16:37.012 }' 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.012 19:06:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.579 [2024-11-26 19:06:04.048363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:37.579 [2024-11-26 19:06:04.048488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.579 [2024-11-26 19:06:04.048531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:37.579 [2024-11-26 19:06:04.048554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.579 [2024-11-26 19:06:04.049477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.579 [2024-11-26 19:06:04.049527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:37.579 [2024-11-26 19:06:04.049646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:37.579 [2024-11-26 19:06:04.049692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:37.579 [2024-11-26 19:06:04.049861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:37.579 [2024-11-26 19:06:04.049883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:37.579 [2024-11-26 19:06:04.050331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:37.579 [2024-11-26 19:06:04.055544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:37.579 [2024-11-26 19:06:04.055583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:16:37.579 [2024-11-26 19:06:04.056019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.579 pt3 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.579 19:06:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.579 "name": "raid_bdev1", 00:16:37.579 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:37.579 "strip_size_kb": 64, 00:16:37.579 "state": "online", 00:16:37.579 "raid_level": "raid5f", 00:16:37.579 "superblock": true, 00:16:37.579 "num_base_bdevs": 3, 00:16:37.579 "num_base_bdevs_discovered": 2, 00:16:37.579 "num_base_bdevs_operational": 2, 00:16:37.579 "base_bdevs_list": [ 00:16:37.579 { 00:16:37.579 "name": null, 00:16:37.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.579 "is_configured": false, 00:16:37.579 "data_offset": 2048, 00:16:37.579 "data_size": 63488 00:16:37.579 }, 00:16:37.579 { 00:16:37.579 "name": "pt2", 00:16:37.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.579 "is_configured": true, 00:16:37.579 "data_offset": 2048, 00:16:37.579 "data_size": 63488 00:16:37.579 }, 00:16:37.579 { 00:16:37.579 "name": "pt3", 00:16:37.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.579 "is_configured": true, 00:16:37.579 "data_offset": 2048, 00:16:37.579 "data_size": 63488 00:16:37.579 } 00:16:37.579 ] 00:16:37.579 }' 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.579 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.145 [2024-11-26 19:06:04.570368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.145 [2024-11-26 19:06:04.570423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.145 [2024-11-26 19:06:04.570536] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.145 [2024-11-26 19:06:04.570632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.145 [2024-11-26 19:06:04.570649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.145 [2024-11-26 19:06:04.650471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.145 [2024-11-26 19:06:04.650568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.145 [2024-11-26 19:06:04.650602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:38.145 [2024-11-26 19:06:04.650617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.145 [2024-11-26 19:06:04.653887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.145 [2024-11-26 19:06:04.653947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.145 [2024-11-26 19:06:04.654112] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:38.145 [2024-11-26 19:06:04.654187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.145 [2024-11-26 19:06:04.654408] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:38.145 [2024-11-26 19:06:04.654429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.145 [2024-11-26 19:06:04.654454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:38.145 [2024-11-26 19:06:04.654523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.145 pt1 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:38.145 19:06:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.145 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.145 "name": "raid_bdev1", 00:16:38.145 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:38.145 "strip_size_kb": 64, 00:16:38.145 "state": "configuring", 00:16:38.145 "raid_level": "raid5f", 00:16:38.145 
"superblock": true, 00:16:38.145 "num_base_bdevs": 3, 00:16:38.145 "num_base_bdevs_discovered": 1, 00:16:38.145 "num_base_bdevs_operational": 2, 00:16:38.145 "base_bdevs_list": [ 00:16:38.145 { 00:16:38.145 "name": null, 00:16:38.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.145 "is_configured": false, 00:16:38.145 "data_offset": 2048, 00:16:38.145 "data_size": 63488 00:16:38.145 }, 00:16:38.145 { 00:16:38.145 "name": "pt2", 00:16:38.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.145 "is_configured": true, 00:16:38.145 "data_offset": 2048, 00:16:38.145 "data_size": 63488 00:16:38.145 }, 00:16:38.145 { 00:16:38.145 "name": null, 00:16:38.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.145 "is_configured": false, 00:16:38.145 "data_offset": 2048, 00:16:38.145 "data_size": 63488 00:16:38.145 } 00:16:38.145 ] 00:16:38.146 }' 00:16:38.146 19:06:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.146 19:06:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.711 [2024-11-26 19:06:05.234738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:38.711 [2024-11-26 19:06:05.234832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.711 [2024-11-26 19:06:05.234867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:38.711 [2024-11-26 19:06:05.234883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.711 [2024-11-26 19:06:05.235571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.711 [2024-11-26 19:06:05.235613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:38.711 [2024-11-26 19:06:05.235742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:38.711 [2024-11-26 19:06:05.235777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:38.711 [2024-11-26 19:06:05.235944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:38.711 [2024-11-26 19:06:05.235961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:38.711 [2024-11-26 19:06:05.236278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:38.711 [2024-11-26 19:06:05.241306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:38.711 [2024-11-26 19:06:05.241349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:38.711 [2024-11-26 19:06:05.241709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.711 pt3 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.711 "name": "raid_bdev1", 00:16:38.711 "uuid": "00add759-1c36-43d8-9af0-66a358f6b8ce", 00:16:38.711 "strip_size_kb": 64, 00:16:38.711 "state": "online", 00:16:38.711 "raid_level": 
"raid5f", 00:16:38.711 "superblock": true, 00:16:38.711 "num_base_bdevs": 3, 00:16:38.711 "num_base_bdevs_discovered": 2, 00:16:38.711 "num_base_bdevs_operational": 2, 00:16:38.711 "base_bdevs_list": [ 00:16:38.711 { 00:16:38.711 "name": null, 00:16:38.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.711 "is_configured": false, 00:16:38.711 "data_offset": 2048, 00:16:38.711 "data_size": 63488 00:16:38.711 }, 00:16:38.711 { 00:16:38.711 "name": "pt2", 00:16:38.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.711 "is_configured": true, 00:16:38.711 "data_offset": 2048, 00:16:38.711 "data_size": 63488 00:16:38.711 }, 00:16:38.711 { 00:16:38.711 "name": "pt3", 00:16:38.711 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.711 "is_configured": true, 00:16:38.711 "data_offset": 2048, 00:16:38.711 "data_size": 63488 00:16:38.711 } 00:16:38.711 ] 00:16:38.711 }' 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.711 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:39.277 [2024-11-26 19:06:05.824258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 00add759-1c36-43d8-9af0-66a358f6b8ce '!=' 00add759-1c36-43d8-9af0-66a358f6b8ce ']' 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 82000 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 82000 ']' 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 82000 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.277 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82000 00:16:39.535 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.535 killing process with pid 82000 00:16:39.535 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.535 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82000' 00:16:39.535 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 82000 00:16:39.535 19:06:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 82000 00:16:39.535 [2024-11-26 19:06:05.904784] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:16:39.535 [2024-11-26 19:06:05.904928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.535 [2024-11-26 19:06:05.905018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.535 [2024-11-26 19:06:05.905037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:39.793 [2024-11-26 19:06:06.207507] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.168 19:06:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:41.168 00:16:41.168 real 0m8.839s 00:16:41.168 user 0m14.182s 00:16:41.168 sys 0m1.392s 00:16:41.168 19:06:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.168 ************************************ 00:16:41.168 END TEST raid5f_superblock_test 00:16:41.168 ************************************ 00:16:41.168 19:06:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.168 19:06:07 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:41.168 19:06:07 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:41.168 19:06:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:41.168 19:06:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.168 19:06:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.168 ************************************ 00:16:41.168 START TEST raid5f_rebuild_test 00:16:41.168 ************************************ 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82454 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82454 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82454 ']' 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.168 19:06:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.168 [2024-11-26 19:06:07.531789] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:16:41.168 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:41.168 Zero copy mechanism will not be used. 00:16:41.169 [2024-11-26 19:06:07.531990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82454 ] 00:16:41.169 [2024-11-26 19:06:07.711757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.426 [2024-11-26 19:06:07.860029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.684 [2024-11-26 19:06:08.086001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.684 [2024-11-26 19:06:08.086084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.943 BaseBdev1_malloc 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.943 
19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.943 [2024-11-26 19:06:08.536745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:41.943 [2024-11-26 19:06:08.536896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.943 [2024-11-26 19:06:08.536930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:41.943 [2024-11-26 19:06:08.536949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.943 [2024-11-26 19:06:08.539976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.943 [2024-11-26 19:06:08.540043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:41.943 BaseBdev1 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.943 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.201 BaseBdev2_malloc 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.201 [2024-11-26 19:06:08.595251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:42.201 [2024-11-26 19:06:08.595408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.201 [2024-11-26 19:06:08.595443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:42.201 [2024-11-26 19:06:08.595461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.201 [2024-11-26 19:06:08.598429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.201 [2024-11-26 19:06:08.598477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:42.201 BaseBdev2 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.201 BaseBdev3_malloc 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.201 [2024-11-26 19:06:08.666632] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:42.201 [2024-11-26 19:06:08.666743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.201 [2024-11-26 19:06:08.666777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:42.201 [2024-11-26 19:06:08.666802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.201 [2024-11-26 19:06:08.669779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.201 [2024-11-26 19:06:08.669863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:42.201 BaseBdev3 00:16:42.201 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.202 spare_malloc 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.202 spare_delay 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.202 [2024-11-26 19:06:08.730211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:42.202 [2024-11-26 19:06:08.730346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.202 [2024-11-26 19:06:08.730375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:42.202 [2024-11-26 19:06:08.730393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.202 [2024-11-26 19:06:08.733537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.202 [2024-11-26 19:06:08.733608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:42.202 spare 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.202 [2024-11-26 19:06:08.738454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.202 [2024-11-26 19:06:08.741106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.202 [2024-11-26 19:06:08.741214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.202 [2024-11-26 19:06:08.741385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:42.202 [2024-11-26 19:06:08.741406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:42.202 [2024-11-26 
19:06:08.741740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:42.202 [2024-11-26 19:06:08.747095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:42.202 [2024-11-26 19:06:08.747132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:42.202 [2024-11-26 19:06:08.747397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.202 "name": "raid_bdev1", 00:16:42.202 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:42.202 "strip_size_kb": 64, 00:16:42.202 "state": "online", 00:16:42.202 "raid_level": "raid5f", 00:16:42.202 "superblock": false, 00:16:42.202 "num_base_bdevs": 3, 00:16:42.202 "num_base_bdevs_discovered": 3, 00:16:42.202 "num_base_bdevs_operational": 3, 00:16:42.202 "base_bdevs_list": [ 00:16:42.202 { 00:16:42.202 "name": "BaseBdev1", 00:16:42.202 "uuid": "face9d9e-6abf-5cd8-bcc7-361ad094eee9", 00:16:42.202 "is_configured": true, 00:16:42.202 "data_offset": 0, 00:16:42.202 "data_size": 65536 00:16:42.202 }, 00:16:42.202 { 00:16:42.202 "name": "BaseBdev2", 00:16:42.202 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:42.202 "is_configured": true, 00:16:42.202 "data_offset": 0, 00:16:42.202 "data_size": 65536 00:16:42.202 }, 00:16:42.202 { 00:16:42.202 "name": "BaseBdev3", 00:16:42.202 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:42.202 "is_configured": true, 00:16:42.202 "data_offset": 0, 00:16:42.202 "data_size": 65536 00:16:42.202 } 00:16:42.202 ] 00:16:42.202 }' 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.202 19:06:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.769 19:06:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.769 [2024-11-26 19:06:09.270058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:42.769 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.770 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:42.770 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:42.770 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:42.770 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:16:42.770 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:42.770 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:42.770 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:42.770 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:43.341 [2024-11-26 19:06:09.661958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:43.341 /dev/nbd0 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:43.341 1+0 records in 00:16:43.341 1+0 records out 00:16:43.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356582 s, 11.5 MB/s 00:16:43.341 
19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:43.341 19:06:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:43.908 512+0 records in 00:16:43.908 512+0 records out 00:16:43.908 67108864 bytes (67 MB, 64 MiB) copied, 0.505315 s, 133 MB/s 00:16:43.908 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:43.908 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:43.908 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:43.908 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:43.908 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:43.908 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:16:43.908 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:43.908 [2024-11-26 19:06:10.516147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.908 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.167 [2024-11-26 19:06:10.538595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.167 19:06:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.167 "name": "raid_bdev1", 00:16:44.167 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:44.167 "strip_size_kb": 64, 00:16:44.167 "state": "online", 00:16:44.167 "raid_level": "raid5f", 00:16:44.167 "superblock": false, 00:16:44.167 "num_base_bdevs": 3, 00:16:44.167 "num_base_bdevs_discovered": 2, 00:16:44.167 "num_base_bdevs_operational": 2, 00:16:44.167 "base_bdevs_list": [ 00:16:44.167 { 00:16:44.167 "name": null, 00:16:44.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.167 "is_configured": false, 00:16:44.167 "data_offset": 0, 00:16:44.167 "data_size": 65536 00:16:44.167 }, 00:16:44.167 { 00:16:44.167 
"name": "BaseBdev2", 00:16:44.167 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:44.167 "is_configured": true, 00:16:44.167 "data_offset": 0, 00:16:44.167 "data_size": 65536 00:16:44.167 }, 00:16:44.167 { 00:16:44.167 "name": "BaseBdev3", 00:16:44.167 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:44.167 "is_configured": true, 00:16:44.167 "data_offset": 0, 00:16:44.167 "data_size": 65536 00:16:44.167 } 00:16:44.167 ] 00:16:44.167 }' 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.167 19:06:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.733 19:06:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:44.733 19:06:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.733 19:06:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.733 [2024-11-26 19:06:11.106780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.733 [2024-11-26 19:06:11.122856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:44.733 19:06:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.733 19:06:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:44.733 [2024-11-26 19:06:11.130531] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.670 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.670 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.670 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.670 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:45.670 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.670 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.670 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.670 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.670 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.670 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.670 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.670 "name": "raid_bdev1", 00:16:45.670 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:45.670 "strip_size_kb": 64, 00:16:45.670 "state": "online", 00:16:45.670 "raid_level": "raid5f", 00:16:45.670 "superblock": false, 00:16:45.670 "num_base_bdevs": 3, 00:16:45.670 "num_base_bdevs_discovered": 3, 00:16:45.670 "num_base_bdevs_operational": 3, 00:16:45.670 "process": { 00:16:45.670 "type": "rebuild", 00:16:45.670 "target": "spare", 00:16:45.670 "progress": { 00:16:45.670 "blocks": 18432, 00:16:45.670 "percent": 14 00:16:45.670 } 00:16:45.670 }, 00:16:45.670 "base_bdevs_list": [ 00:16:45.670 { 00:16:45.670 "name": "spare", 00:16:45.670 "uuid": "1705c871-dfe8-565b-b84f-530a4ef6df81", 00:16:45.670 "is_configured": true, 00:16:45.670 "data_offset": 0, 00:16:45.670 "data_size": 65536 00:16:45.670 }, 00:16:45.670 { 00:16:45.670 "name": "BaseBdev2", 00:16:45.670 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:45.671 "is_configured": true, 00:16:45.671 "data_offset": 0, 00:16:45.671 "data_size": 65536 00:16:45.671 }, 00:16:45.671 { 00:16:45.671 "name": "BaseBdev3", 00:16:45.671 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:45.671 "is_configured": true, 00:16:45.671 "data_offset": 0, 00:16:45.671 
"data_size": 65536 00:16:45.671 } 00:16:45.671 ] 00:16:45.671 }' 00:16:45.671 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.671 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.671 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.671 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.671 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:45.671 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.671 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.929 [2024-11-26 19:06:12.292811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.929 [2024-11-26 19:06:12.349235] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:45.929 [2024-11-26 19:06:12.349358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.929 [2024-11-26 19:06:12.349389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.929 [2024-11-26 19:06:12.349402] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.929 "name": "raid_bdev1", 00:16:45.929 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:45.929 "strip_size_kb": 64, 00:16:45.929 "state": "online", 00:16:45.929 "raid_level": "raid5f", 00:16:45.929 "superblock": false, 00:16:45.929 "num_base_bdevs": 3, 00:16:45.929 "num_base_bdevs_discovered": 2, 00:16:45.929 "num_base_bdevs_operational": 2, 00:16:45.929 "base_bdevs_list": [ 00:16:45.929 { 00:16:45.929 "name": null, 00:16:45.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.929 "is_configured": false, 00:16:45.929 "data_offset": 0, 00:16:45.929 "data_size": 65536 00:16:45.929 }, 00:16:45.929 { 00:16:45.929 "name": "BaseBdev2", 00:16:45.929 
"uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:45.929 "is_configured": true, 00:16:45.929 "data_offset": 0, 00:16:45.929 "data_size": 65536 00:16:45.929 }, 00:16:45.929 { 00:16:45.929 "name": "BaseBdev3", 00:16:45.929 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:45.929 "is_configured": true, 00:16:45.929 "data_offset": 0, 00:16:45.929 "data_size": 65536 00:16:45.929 } 00:16:45.929 ] 00:16:45.929 }' 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.929 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.498 "name": "raid_bdev1", 00:16:46.498 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:46.498 "strip_size_kb": 64, 00:16:46.498 "state": "online", 00:16:46.498 "raid_level": 
"raid5f", 00:16:46.498 "superblock": false, 00:16:46.498 "num_base_bdevs": 3, 00:16:46.498 "num_base_bdevs_discovered": 2, 00:16:46.498 "num_base_bdevs_operational": 2, 00:16:46.498 "base_bdevs_list": [ 00:16:46.498 { 00:16:46.498 "name": null, 00:16:46.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.498 "is_configured": false, 00:16:46.498 "data_offset": 0, 00:16:46.498 "data_size": 65536 00:16:46.498 }, 00:16:46.498 { 00:16:46.498 "name": "BaseBdev2", 00:16:46.498 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:46.498 "is_configured": true, 00:16:46.498 "data_offset": 0, 00:16:46.498 "data_size": 65536 00:16:46.498 }, 00:16:46.498 { 00:16:46.498 "name": "BaseBdev3", 00:16:46.498 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:46.498 "is_configured": true, 00:16:46.498 "data_offset": 0, 00:16:46.498 "data_size": 65536 00:16:46.498 } 00:16:46.498 ] 00:16:46.498 }' 00:16:46.498 19:06:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.498 19:06:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.498 19:06:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.498 19:06:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.498 19:06:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:46.498 19:06:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.498 19:06:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.498 [2024-11-26 19:06:13.079576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.498 [2024-11-26 19:06:13.094864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:46.498 19:06:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.498 19:06:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:46.498 [2024-11-26 19:06:13.102373] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.876 "name": "raid_bdev1", 00:16:47.876 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:47.876 "strip_size_kb": 64, 00:16:47.876 "state": "online", 00:16:47.876 "raid_level": "raid5f", 00:16:47.876 "superblock": false, 00:16:47.876 "num_base_bdevs": 3, 00:16:47.876 "num_base_bdevs_discovered": 3, 00:16:47.876 "num_base_bdevs_operational": 3, 00:16:47.876 "process": { 00:16:47.876 "type": "rebuild", 00:16:47.876 "target": "spare", 00:16:47.876 "progress": { 00:16:47.876 "blocks": 18432, 00:16:47.876 
"percent": 14 00:16:47.876 } 00:16:47.876 }, 00:16:47.876 "base_bdevs_list": [ 00:16:47.876 { 00:16:47.876 "name": "spare", 00:16:47.876 "uuid": "1705c871-dfe8-565b-b84f-530a4ef6df81", 00:16:47.876 "is_configured": true, 00:16:47.876 "data_offset": 0, 00:16:47.876 "data_size": 65536 00:16:47.876 }, 00:16:47.876 { 00:16:47.876 "name": "BaseBdev2", 00:16:47.876 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:47.876 "is_configured": true, 00:16:47.876 "data_offset": 0, 00:16:47.876 "data_size": 65536 00:16:47.876 }, 00:16:47.876 { 00:16:47.876 "name": "BaseBdev3", 00:16:47.876 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:47.876 "is_configured": true, 00:16:47.876 "data_offset": 0, 00:16:47.876 "data_size": 65536 00:16:47.876 } 00:16:47.876 ] 00:16:47.876 }' 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=612 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.876 19:06:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.877 19:06:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.877 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.877 "name": "raid_bdev1", 00:16:47.877 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:47.877 "strip_size_kb": 64, 00:16:47.877 "state": "online", 00:16:47.877 "raid_level": "raid5f", 00:16:47.877 "superblock": false, 00:16:47.877 "num_base_bdevs": 3, 00:16:47.877 "num_base_bdevs_discovered": 3, 00:16:47.877 "num_base_bdevs_operational": 3, 00:16:47.877 "process": { 00:16:47.877 "type": "rebuild", 00:16:47.877 "target": "spare", 00:16:47.877 "progress": { 00:16:47.877 "blocks": 22528, 00:16:47.877 "percent": 17 00:16:47.877 } 00:16:47.877 }, 00:16:47.877 "base_bdevs_list": [ 00:16:47.877 { 00:16:47.877 "name": "spare", 00:16:47.877 "uuid": "1705c871-dfe8-565b-b84f-530a4ef6df81", 00:16:47.877 "is_configured": true, 00:16:47.877 "data_offset": 0, 00:16:47.877 "data_size": 65536 00:16:47.877 }, 00:16:47.877 { 00:16:47.877 "name": "BaseBdev2", 00:16:47.877 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:47.877 "is_configured": true, 00:16:47.877 "data_offset": 0, 00:16:47.877 
"data_size": 65536 00:16:47.877 }, 00:16:47.877 { 00:16:47.877 "name": "BaseBdev3", 00:16:47.877 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:47.877 "is_configured": true, 00:16:47.877 "data_offset": 0, 00:16:47.877 "data_size": 65536 00:16:47.877 } 00:16:47.877 ] 00:16:47.877 }' 00:16:47.877 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.877 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.877 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.877 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.877 19:06:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:48.820 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.820 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.820 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.820 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.820 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.820 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.820 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.820 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.820 19:06:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.820 19:06:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.820 19:06:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.080 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.080 "name": "raid_bdev1", 00:16:49.080 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:49.080 "strip_size_kb": 64, 00:16:49.080 "state": "online", 00:16:49.080 "raid_level": "raid5f", 00:16:49.080 "superblock": false, 00:16:49.080 "num_base_bdevs": 3, 00:16:49.080 "num_base_bdevs_discovered": 3, 00:16:49.080 "num_base_bdevs_operational": 3, 00:16:49.080 "process": { 00:16:49.080 "type": "rebuild", 00:16:49.080 "target": "spare", 00:16:49.080 "progress": { 00:16:49.080 "blocks": 45056, 00:16:49.080 "percent": 34 00:16:49.080 } 00:16:49.080 }, 00:16:49.080 "base_bdevs_list": [ 00:16:49.080 { 00:16:49.080 "name": "spare", 00:16:49.080 "uuid": "1705c871-dfe8-565b-b84f-530a4ef6df81", 00:16:49.080 "is_configured": true, 00:16:49.080 "data_offset": 0, 00:16:49.080 "data_size": 65536 00:16:49.080 }, 00:16:49.080 { 00:16:49.080 "name": "BaseBdev2", 00:16:49.080 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:49.080 "is_configured": true, 00:16:49.080 "data_offset": 0, 00:16:49.080 "data_size": 65536 00:16:49.080 }, 00:16:49.080 { 00:16:49.080 "name": "BaseBdev3", 00:16:49.080 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:49.080 "is_configured": true, 00:16:49.080 "data_offset": 0, 00:16:49.080 "data_size": 65536 00:16:49.080 } 00:16:49.080 ] 00:16:49.080 }' 00:16:49.080 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.080 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.080 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.080 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.080 19:06:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
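The "percent" values in these rebuild-progress snapshots are consistent with integer blocks-rebuilt over total data blocks, where the total is the per-base-bdev data_size (65536 blocks) times the 2 data bdevs. This is an observation about the logged numbers, not the RPC layer's documented formula; a quick check against the snapshots seen so far:

```shell
# Reproduce the logged rebuild percentages, assuming
# percent = blocks * 100 / total (integer division), with
# total = data_size * data bdevs = 65536 * 2 = 131072 blocks.
data_size=65536    # blocks per base bdev, from the RPC output above
total=$(( data_size * 2 ))
for blocks in 18432 22528 45056; do
  echo "$blocks => $(( blocks * 100 / total ))%"
done
```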
00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.016 "name": "raid_bdev1", 00:16:50.016 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:50.016 "strip_size_kb": 64, 00:16:50.016 "state": "online", 00:16:50.016 "raid_level": "raid5f", 00:16:50.016 "superblock": false, 00:16:50.016 "num_base_bdevs": 3, 00:16:50.016 "num_base_bdevs_discovered": 3, 00:16:50.016 "num_base_bdevs_operational": 3, 00:16:50.016 "process": { 00:16:50.016 "type": "rebuild", 00:16:50.016 "target": "spare", 00:16:50.016 "progress": { 00:16:50.016 "blocks": 69632, 00:16:50.016 "percent": 53 00:16:50.016 } 00:16:50.016 }, 00:16:50.016 "base_bdevs_list": [ 00:16:50.016 { 00:16:50.016 "name": "spare", 00:16:50.016 "uuid": 
"1705c871-dfe8-565b-b84f-530a4ef6df81", 00:16:50.016 "is_configured": true, 00:16:50.016 "data_offset": 0, 00:16:50.016 "data_size": 65536 00:16:50.016 }, 00:16:50.016 { 00:16:50.016 "name": "BaseBdev2", 00:16:50.016 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:50.016 "is_configured": true, 00:16:50.016 "data_offset": 0, 00:16:50.016 "data_size": 65536 00:16:50.016 }, 00:16:50.016 { 00:16:50.016 "name": "BaseBdev3", 00:16:50.016 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:50.016 "is_configured": true, 00:16:50.016 "data_offset": 0, 00:16:50.016 "data_size": 65536 00:16:50.016 } 00:16:50.016 ] 00:16:50.016 }' 00:16:50.016 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.274 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.274 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.274 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.274 19:06:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.210 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.210 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.210 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.210 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.210 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.210 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.210 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.210 19:06:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.210 19:06:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.210 19:06:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.210 19:06:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.210 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.210 "name": "raid_bdev1", 00:16:51.210 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:51.210 "strip_size_kb": 64, 00:16:51.210 "state": "online", 00:16:51.210 "raid_level": "raid5f", 00:16:51.210 "superblock": false, 00:16:51.210 "num_base_bdevs": 3, 00:16:51.210 "num_base_bdevs_discovered": 3, 00:16:51.210 "num_base_bdevs_operational": 3, 00:16:51.210 "process": { 00:16:51.210 "type": "rebuild", 00:16:51.210 "target": "spare", 00:16:51.210 "progress": { 00:16:51.210 "blocks": 92160, 00:16:51.210 "percent": 70 00:16:51.210 } 00:16:51.210 }, 00:16:51.210 "base_bdevs_list": [ 00:16:51.210 { 00:16:51.210 "name": "spare", 00:16:51.210 "uuid": "1705c871-dfe8-565b-b84f-530a4ef6df81", 00:16:51.210 "is_configured": true, 00:16:51.210 "data_offset": 0, 00:16:51.211 "data_size": 65536 00:16:51.211 }, 00:16:51.211 { 00:16:51.211 "name": "BaseBdev2", 00:16:51.211 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:51.211 "is_configured": true, 00:16:51.211 "data_offset": 0, 00:16:51.211 "data_size": 65536 00:16:51.211 }, 00:16:51.211 { 00:16:51.211 "name": "BaseBdev3", 00:16:51.211 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:51.211 "is_configured": true, 00:16:51.211 "data_offset": 0, 00:16:51.211 "data_size": 65536 00:16:51.211 } 00:16:51.211 ] 00:16:51.211 }' 00:16:51.211 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.211 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.211 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.469 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.469 19:06:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.404 "name": "raid_bdev1", 00:16:52.404 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:52.404 "strip_size_kb": 64, 00:16:52.404 "state": "online", 00:16:52.404 "raid_level": "raid5f", 00:16:52.404 "superblock": false, 00:16:52.404 "num_base_bdevs": 3, 00:16:52.404 "num_base_bdevs_discovered": 3, 00:16:52.404 
"num_base_bdevs_operational": 3, 00:16:52.404 "process": { 00:16:52.404 "type": "rebuild", 00:16:52.404 "target": "spare", 00:16:52.404 "progress": { 00:16:52.404 "blocks": 114688, 00:16:52.404 "percent": 87 00:16:52.404 } 00:16:52.404 }, 00:16:52.404 "base_bdevs_list": [ 00:16:52.404 { 00:16:52.404 "name": "spare", 00:16:52.404 "uuid": "1705c871-dfe8-565b-b84f-530a4ef6df81", 00:16:52.404 "is_configured": true, 00:16:52.404 "data_offset": 0, 00:16:52.404 "data_size": 65536 00:16:52.404 }, 00:16:52.404 { 00:16:52.404 "name": "BaseBdev2", 00:16:52.404 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:52.404 "is_configured": true, 00:16:52.404 "data_offset": 0, 00:16:52.404 "data_size": 65536 00:16:52.404 }, 00:16:52.404 { 00:16:52.404 "name": "BaseBdev3", 00:16:52.404 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:52.404 "is_configured": true, 00:16:52.404 "data_offset": 0, 00:16:52.404 "data_size": 65536 00:16:52.404 } 00:16:52.404 ] 00:16:52.404 }' 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.404 19:06:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.404 19:06:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.404 19:06:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.339 [2024-11-26 19:06:19.598369] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:53.339 [2024-11-26 19:06:19.598500] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:53.339 [2024-11-26 19:06:19.598558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.598 "name": "raid_bdev1", 00:16:53.598 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:53.598 "strip_size_kb": 64, 00:16:53.598 "state": "online", 00:16:53.598 "raid_level": "raid5f", 00:16:53.598 "superblock": false, 00:16:53.598 "num_base_bdevs": 3, 00:16:53.598 "num_base_bdevs_discovered": 3, 00:16:53.598 "num_base_bdevs_operational": 3, 00:16:53.598 "base_bdevs_list": [ 00:16:53.598 { 00:16:53.598 "name": "spare", 00:16:53.598 "uuid": "1705c871-dfe8-565b-b84f-530a4ef6df81", 00:16:53.598 "is_configured": true, 00:16:53.598 "data_offset": 0, 00:16:53.598 "data_size": 65536 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "name": "BaseBdev2", 00:16:53.598 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:53.598 "is_configured": true, 00:16:53.598 
"data_offset": 0, 00:16:53.598 "data_size": 65536 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "name": "BaseBdev3", 00:16:53.598 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:53.598 "is_configured": true, 00:16:53.598 "data_offset": 0, 00:16:53.598 "data_size": 65536 00:16:53.598 } 00:16:53.598 ] 00:16:53.598 }' 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.598 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.858 19:06:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.858 "name": "raid_bdev1", 00:16:53.858 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:53.858 "strip_size_kb": 64, 00:16:53.858 "state": "online", 00:16:53.858 "raid_level": "raid5f", 00:16:53.858 "superblock": false, 00:16:53.858 "num_base_bdevs": 3, 00:16:53.858 "num_base_bdevs_discovered": 3, 00:16:53.858 "num_base_bdevs_operational": 3, 00:16:53.858 "base_bdevs_list": [ 00:16:53.858 { 00:16:53.858 "name": "spare", 00:16:53.858 "uuid": "1705c871-dfe8-565b-b84f-530a4ef6df81", 00:16:53.858 "is_configured": true, 00:16:53.858 "data_offset": 0, 00:16:53.858 "data_size": 65536 00:16:53.858 }, 00:16:53.858 { 00:16:53.858 "name": "BaseBdev2", 00:16:53.858 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:53.858 "is_configured": true, 00:16:53.858 "data_offset": 0, 00:16:53.858 "data_size": 65536 00:16:53.858 }, 00:16:53.858 { 00:16:53.858 "name": "BaseBdev3", 00:16:53.858 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:53.858 "is_configured": true, 00:16:53.858 "data_offset": 0, 00:16:53.858 "data_size": 65536 00:16:53.858 } 00:16:53.858 ] 00:16:53.858 }' 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.858 19:06:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.858 "name": "raid_bdev1", 00:16:53.858 "uuid": "d2b9a5d9-95a0-4e1d-8b5e-4f0c7e74f920", 00:16:53.858 "strip_size_kb": 64, 00:16:53.858 "state": "online", 00:16:53.858 "raid_level": "raid5f", 00:16:53.858 "superblock": false, 00:16:53.858 "num_base_bdevs": 3, 00:16:53.858 "num_base_bdevs_discovered": 3, 00:16:53.858 "num_base_bdevs_operational": 3, 00:16:53.858 "base_bdevs_list": [ 00:16:53.858 { 00:16:53.858 "name": "spare", 00:16:53.858 "uuid": "1705c871-dfe8-565b-b84f-530a4ef6df81", 00:16:53.858 "is_configured": true, 00:16:53.858 "data_offset": 0, 00:16:53.858 "data_size": 65536 00:16:53.858 }, 00:16:53.858 { 00:16:53.858 
"name": "BaseBdev2", 00:16:53.858 "uuid": "bacc63e3-bd1b-5dd3-aba3-805e1ca615d1", 00:16:53.858 "is_configured": true, 00:16:53.858 "data_offset": 0, 00:16:53.858 "data_size": 65536 00:16:53.858 }, 00:16:53.858 { 00:16:53.858 "name": "BaseBdev3", 00:16:53.858 "uuid": "7cf08fd2-e07b-575b-b420-6aad0ef49e23", 00:16:53.858 "is_configured": true, 00:16:53.858 "data_offset": 0, 00:16:53.858 "data_size": 65536 00:16:53.858 } 00:16:53.858 ] 00:16:53.858 }' 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.858 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.425 [2024-11-26 19:06:20.884805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.425 [2024-11-26 19:06:20.884887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.425 [2024-11-26 19:06:20.885049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.425 [2024-11-26 19:06:20.885200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.425 [2024-11-26 19:06:20.885252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.425 19:06:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:54.425 19:06:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:54.684 /dev/nbd0 00:16:54.684 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:54.684 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:54.684 19:06:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:54.684 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:54.684 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:54.684 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:54.684 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:54.684 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:54.685 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:54.685 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:54.685 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.685 1+0 records in 00:16:54.685 1+0 records out 00:16:54.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357422 s, 11.5 MB/s 00:16:54.685 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.685 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:54.685 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.685 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:54.685 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:54.685 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:54.685 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:54.685 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:55.252 /dev/nbd1 00:16:55.252 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:55.252 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:55.252 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:55.252 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:55.253 1+0 records in 00:16:55.253 1+0 records out 00:16:55.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459451 s, 8.9 MB/s 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:55.253 19:06:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.253 19:06:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:55.511 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:55.511 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:55.511 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:55.511 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.511 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.511 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:55.511 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:55.511 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:16:55.511 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.511 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82454 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82454 ']' 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82454 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.770 19:06:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82454 00:16:55.770 killing process with pid 82454 00:16:55.771 Received shutdown signal, test time was about 60.000000 seconds 00:16:55.771 00:16:55.771 Latency(us) 00:16:55.771 
[2024-11-26T19:06:22.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.771 [2024-11-26T19:06:22.394Z] =================================================================================================================== 00:16:55.771 [2024-11-26T19:06:22.394Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:55.771 19:06:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.771 19:06:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.771 19:06:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82454' 00:16:55.771 19:06:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82454 00:16:55.771 19:06:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82454 00:16:55.771 [2024-11-26 19:06:22.348956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.338 [2024-11-26 19:06:22.725369] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:57.275 19:06:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:57.275 00:16:57.275 real 0m16.474s 00:16:57.275 user 0m20.872s 00:16:57.275 sys 0m2.150s 00:16:57.275 19:06:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.275 ************************************ 00:16:57.275 END TEST raid5f_rebuild_test 00:16:57.275 ************************************ 00:16:57.275 19:06:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.534 19:06:23 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:57.534 19:06:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:57.534 19:06:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.535 19:06:23 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.535 ************************************ 00:16:57.535 START TEST raid5f_rebuild_test_sb 00:16:57.535 ************************************ 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82896 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82896 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82896 ']' 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.535 19:06:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.535 [2024-11-26 19:06:24.062840] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:16:57.535 [2024-11-26 19:06:24.063304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82896 ] 00:16:57.535 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:57.535 Zero copy mechanism will not be used. 
00:16:57.794 [2024-11-26 19:06:24.253063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.794 [2024-11-26 19:06:24.407640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.052 [2024-11-26 19:06:24.644494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.052 [2024-11-26 19:06:24.644819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.620 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.620 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:58.620 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.620 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:58.620 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.620 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.620 BaseBdev1_malloc 00:16:58.620 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.620 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:58.620 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.620 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.620 [2024-11-26 19:06:25.110223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:58.620 [2024-11-26 19:06:25.110372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.620 [2024-11-26 19:06:25.110406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:58.620 
[2024-11-26 19:06:25.110426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.620 [2024-11-26 19:06:25.113537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.621 [2024-11-26 19:06:25.113590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:58.621 BaseBdev1 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.621 BaseBdev2_malloc 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.621 [2024-11-26 19:06:25.170530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:58.621 [2024-11-26 19:06:25.170754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.621 [2024-11-26 19:06:25.170831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:58.621 [2024-11-26 19:06:25.171051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.621 [2024-11-26 19:06:25.174067] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.621 [2024-11-26 19:06:25.174117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:58.621 BaseBdev2 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.621 BaseBdev3_malloc 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.621 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.621 [2024-11-26 19:06:25.238541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:58.621 [2024-11-26 19:06:25.238758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.621 [2024-11-26 19:06:25.238903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:58.621 [2024-11-26 19:06:25.239018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.881 [2024-11-26 19:06:25.242148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.881 BaseBdev3 00:16:58.881 [2024-11-26 19:06:25.242358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev3 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.881 spare_malloc 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.881 spare_delay 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.881 [2024-11-26 19:06:25.304104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:58.881 [2024-11-26 19:06:25.304181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.881 [2024-11-26 19:06:25.304209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:58.881 [2024-11-26 19:06:25.304226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.881 [2024-11-26 19:06:25.307411] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.881 [2024-11-26 19:06:25.307462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:58.881 spare 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.881 [2024-11-26 19:06:25.312312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.881 [2024-11-26 19:06:25.315005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.881 [2024-11-26 19:06:25.315268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.881 [2024-11-26 19:06:25.315541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:58.881 [2024-11-26 19:06:25.315561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:58.881 [2024-11-26 19:06:25.315941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:58.881 [2024-11-26 19:06:25.321501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:58.881 [2024-11-26 19:06:25.321665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:58.881 [2024-11-26 19:06:25.322038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.881 19:06:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.881 "name": "raid_bdev1", 00:16:58.881 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:16:58.881 "strip_size_kb": 64, 00:16:58.881 "state": "online", 00:16:58.881 "raid_level": "raid5f", 00:16:58.881 "superblock": true, 
00:16:58.881 "num_base_bdevs": 3, 00:16:58.881 "num_base_bdevs_discovered": 3, 00:16:58.881 "num_base_bdevs_operational": 3, 00:16:58.881 "base_bdevs_list": [ 00:16:58.881 { 00:16:58.881 "name": "BaseBdev1", 00:16:58.881 "uuid": "fec4b41c-41b4-5acc-a8de-e6eb129040a8", 00:16:58.881 "is_configured": true, 00:16:58.881 "data_offset": 2048, 00:16:58.881 "data_size": 63488 00:16:58.881 }, 00:16:58.881 { 00:16:58.881 "name": "BaseBdev2", 00:16:58.881 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:16:58.881 "is_configured": true, 00:16:58.881 "data_offset": 2048, 00:16:58.881 "data_size": 63488 00:16:58.881 }, 00:16:58.881 { 00:16:58.881 "name": "BaseBdev3", 00:16:58.881 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:16:58.881 "is_configured": true, 00:16:58.881 "data_offset": 2048, 00:16:58.881 "data_size": 63488 00:16:58.881 } 00:16:58.881 ] 00:16:58.881 }' 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.881 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:59.448 [2024-11-26 19:06:25.836782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.448 19:06:25 
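The `jq -r '.[] | select(.name == "raid_bdev1")'` filter used throughout this test to pull one raid bdev out of the `bdev_raid_get_bdevs all` array can be reproduced in a short Python sketch. The JSON below is a trimmed stand-in for the RPC output shown in the log, not the full structure:

```python
import json

# Trimmed stand-in for the `rpc.py bdev_raid_get_bdevs all` output in the log.
rpc_output = json.loads("""
[
  {
    "name": "raid_bdev1",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "raid5f",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
""")

# Equivalent of jq's `.[] | select(.name == "raid_bdev1")`:
# keep only the entry whose name matches.
info = next(b for b in rpc_output if b["name"] == "raid_bdev1")
print(info["state"], info["raid_level"], info["num_base_bdevs_operational"])
```

This is the same selection `verify_raid_bdev_state` performs before asserting on the state, level, strip size, and operational bdev count.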
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:59.448 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:59.449 19:06:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:16:59.708 [2024-11-26 19:06:26.176652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:59.708 /dev/nbd0 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:59.708 1+0 records in 00:16:59.708 1+0 records out 00:16:59.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343105 s, 11.9 MB/s 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:59.708 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:00.316 496+0 records in 00:17:00.316 496+0 records out 00:17:00.316 65011712 bytes (65 MB, 62 MiB) copied, 0.47147 s, 138 MB/s 00:17:00.316 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:00.316 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:00.316 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:00.316 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:00.316 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:00.316 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.316 19:06:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:00.573 [2024-11-26 19:06:26.998801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
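The numbers in the `dd` run above are internally consistent: with a 64 KiB strip (`-z 64`) over 3 base bdevs, a raid5f full stripe carries 2 data strips (one strip per stripe holds parity), i.e. 128 KiB or 256 blocks of 512 B — matching `write_unit_size=256` and the `echo 128` in the log — and the raid bdev's 126976 blocks hold exactly 496 such stripes, matching `bs=131072 count=496` and the 65011712 bytes copied. A quick arithmetic check:

```python
BLOCK_SIZE = 512          # blocklen reported by raid_bdev_configure_cont
STRIP_SIZE_KB = 64        # -z 64 passed to bdev_raid_create
NUM_BASE_BDEVS = 3        # BaseBdev1..BaseBdev3

# raid5f dedicates one strip per stripe to parity, so 2 of 3 strips hold data.
data_strips = NUM_BASE_BDEVS - 1
full_stripe_bytes = data_strips * STRIP_SIZE_KB * 1024   # bytes per full stripe
write_unit_blocks = full_stripe_bytes // BLOCK_SIZE      # blocks per write unit

raid_bdev_blocks = 126976  # `jq -r '.[].num_blocks'` result from the log
full_stripes = raid_bdev_blocks // write_unit_blocks
total_bytes = full_stripes * full_stripe_bytes

print(write_unit_blocks, full_stripes, total_bytes)
```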
00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.573 [2024-11-26 19:06:27.033402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.573 19:06:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.573 "name": "raid_bdev1", 00:17:00.573 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:00.573 "strip_size_kb": 64, 00:17:00.573 "state": "online", 00:17:00.573 "raid_level": "raid5f", 00:17:00.573 "superblock": true, 00:17:00.573 "num_base_bdevs": 3, 00:17:00.573 "num_base_bdevs_discovered": 2, 00:17:00.573 "num_base_bdevs_operational": 2, 00:17:00.573 "base_bdevs_list": [ 00:17:00.573 { 00:17:00.573 "name": null, 00:17:00.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.573 "is_configured": false, 00:17:00.573 "data_offset": 0, 00:17:00.573 "data_size": 63488 00:17:00.573 }, 00:17:00.573 { 00:17:00.573 "name": "BaseBdev2", 00:17:00.573 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:00.573 "is_configured": true, 00:17:00.573 "data_offset": 2048, 00:17:00.573 "data_size": 63488 00:17:00.573 }, 00:17:00.573 { 00:17:00.573 "name": "BaseBdev3", 00:17:00.573 "uuid": 
"4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:00.573 "is_configured": true, 00:17:00.573 "data_offset": 2048, 00:17:00.573 "data_size": 63488 00:17:00.573 } 00:17:00.573 ] 00:17:00.573 }' 00:17:00.573 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.574 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.140 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:01.140 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.140 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.140 [2024-11-26 19:06:27.521553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:01.140 [2024-11-26 19:06:27.537879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:01.140 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.140 19:06:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:01.140 [2024-11-26 19:06:27.545484] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
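After `bdev_raid_remove_base_bdev BaseBdev1`, the `base_bdevs_list` above still has 3 slots, but the removed slot is reported with a null name, an all-zero uuid, and `is_configured: false` — which is why `num_base_bdevs` stays 3 while `num_base_bdevs_discovered` drops to 2. A sketch of that bookkeeping, using a trimmed copy of the list from the log:

```python
import json

# Trimmed base_bdevs_list from the log after removing BaseBdev1:
# the removed slot keeps its position but is unconfigured.
base_bdevs_list = json.loads("""
[
  {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
   "is_configured": false, "data_offset": 0, "data_size": 63488},
  {"name": "BaseBdev2", "is_configured": true,
   "data_offset": 2048, "data_size": 63488},
  {"name": "BaseBdev3", "is_configured": true,
   "data_offset": 2048, "data_size": 63488}
]
""")

num_base_bdevs = len(base_bdevs_list)                               # slots
num_discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
print(num_base_bdevs, num_discovered)
```

These are the two counts `verify_raid_bdev_state raid_bdev1 online raid5f 64 2` checks against.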
all 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.076 "name": "raid_bdev1", 00:17:02.076 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:02.076 "strip_size_kb": 64, 00:17:02.076 "state": "online", 00:17:02.076 "raid_level": "raid5f", 00:17:02.076 "superblock": true, 00:17:02.076 "num_base_bdevs": 3, 00:17:02.076 "num_base_bdevs_discovered": 3, 00:17:02.076 "num_base_bdevs_operational": 3, 00:17:02.076 "process": { 00:17:02.076 "type": "rebuild", 00:17:02.076 "target": "spare", 00:17:02.076 "progress": { 00:17:02.076 "blocks": 18432, 00:17:02.076 "percent": 14 00:17:02.076 } 00:17:02.076 }, 00:17:02.076 "base_bdevs_list": [ 00:17:02.076 { 00:17:02.076 "name": "spare", 00:17:02.076 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:02.076 "is_configured": true, 00:17:02.076 "data_offset": 2048, 00:17:02.076 "data_size": 63488 00:17:02.076 }, 00:17:02.076 { 00:17:02.076 "name": "BaseBdev2", 00:17:02.076 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:02.076 "is_configured": true, 00:17:02.076 "data_offset": 2048, 00:17:02.076 "data_size": 63488 00:17:02.076 }, 00:17:02.076 { 00:17:02.076 "name": "BaseBdev3", 00:17:02.076 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:02.076 "is_configured": true, 00:17:02.076 "data_offset": 2048, 00:17:02.076 "data_size": 63488 00:17:02.076 } 00:17:02.076 ] 00:17:02.076 }' 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.076 19:06:28 
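The rebuild `progress` pair above (`"blocks": 18432`, `"percent": 14`) is consistent with integer truncation over the raid bdev's 126976 blocks. A one-line check, assuming percent is floor(blocks * 100 / num_blocks) — an inference from the figures in the log, not a quote of the SPDK source:

```python
progress_blocks = 18432    # "progress": { "blocks": ... } from the log
raid_bdev_blocks = 126976  # num_blocks reported by bdev_get_bdevs

# Integer (floor) percentage, matching the reported value.
percent = progress_blocks * 100 // raid_bdev_blocks
print(percent)
```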
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.076 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.334 [2024-11-26 19:06:28.707259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.334 [2024-11-26 19:06:28.763261] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:02.334 [2024-11-26 19:06:28.763722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.334 [2024-11-26 19:06:28.763924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.334 [2024-11-26 19:06:28.763950] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.334 19:06:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.334 "name": "raid_bdev1", 00:17:02.334 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:02.334 "strip_size_kb": 64, 00:17:02.334 "state": "online", 00:17:02.334 "raid_level": "raid5f", 00:17:02.334 "superblock": true, 00:17:02.334 "num_base_bdevs": 3, 00:17:02.334 "num_base_bdevs_discovered": 2, 00:17:02.334 "num_base_bdevs_operational": 2, 00:17:02.334 "base_bdevs_list": [ 00:17:02.334 { 00:17:02.334 "name": null, 00:17:02.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.334 "is_configured": false, 00:17:02.334 "data_offset": 0, 00:17:02.334 "data_size": 63488 00:17:02.334 }, 00:17:02.334 { 00:17:02.334 "name": "BaseBdev2", 00:17:02.334 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:02.334 "is_configured": true, 00:17:02.334 "data_offset": 2048, 00:17:02.334 "data_size": 
63488 00:17:02.334 }, 00:17:02.334 { 00:17:02.334 "name": "BaseBdev3", 00:17:02.334 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:02.334 "is_configured": true, 00:17:02.334 "data_offset": 2048, 00:17:02.334 "data_size": 63488 00:17:02.334 } 00:17:02.334 ] 00:17:02.334 }' 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.334 19:06:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.901 "name": "raid_bdev1", 00:17:02.901 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:02.901 "strip_size_kb": 64, 00:17:02.901 "state": "online", 00:17:02.901 "raid_level": "raid5f", 00:17:02.901 "superblock": true, 00:17:02.901 "num_base_bdevs": 3, 00:17:02.901 
"num_base_bdevs_discovered": 2, 00:17:02.901 "num_base_bdevs_operational": 2, 00:17:02.901 "base_bdevs_list": [ 00:17:02.901 { 00:17:02.901 "name": null, 00:17:02.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.901 "is_configured": false, 00:17:02.901 "data_offset": 0, 00:17:02.901 "data_size": 63488 00:17:02.901 }, 00:17:02.901 { 00:17:02.901 "name": "BaseBdev2", 00:17:02.901 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:02.901 "is_configured": true, 00:17:02.901 "data_offset": 2048, 00:17:02.901 "data_size": 63488 00:17:02.901 }, 00:17:02.901 { 00:17:02.901 "name": "BaseBdev3", 00:17:02.901 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:02.901 "is_configured": true, 00:17:02.901 "data_offset": 2048, 00:17:02.901 "data_size": 63488 00:17:02.901 } 00:17:02.901 ] 00:17:02.901 }' 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.901 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.901 [2024-11-26 19:06:29.515744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.158 [2024-11-26 19:06:29.531818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:03.158 19:06:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.158 19:06:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:03.158 [2024-11-26 19:06:29.539841] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.091 "name": "raid_bdev1", 00:17:04.091 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:04.091 "strip_size_kb": 64, 00:17:04.091 "state": "online", 00:17:04.091 "raid_level": "raid5f", 00:17:04.091 "superblock": true, 00:17:04.091 "num_base_bdevs": 3, 00:17:04.091 "num_base_bdevs_discovered": 3, 00:17:04.091 "num_base_bdevs_operational": 3, 00:17:04.091 "process": { 00:17:04.091 "type": "rebuild", 00:17:04.091 "target": "spare", 00:17:04.091 "progress": { 00:17:04.091 "blocks": 18432, 00:17:04.091 "percent": 14 00:17:04.091 } 
00:17:04.091 }, 00:17:04.091 "base_bdevs_list": [ 00:17:04.091 { 00:17:04.091 "name": "spare", 00:17:04.091 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:04.091 "is_configured": true, 00:17:04.091 "data_offset": 2048, 00:17:04.091 "data_size": 63488 00:17:04.091 }, 00:17:04.091 { 00:17:04.091 "name": "BaseBdev2", 00:17:04.091 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:04.091 "is_configured": true, 00:17:04.091 "data_offset": 2048, 00:17:04.091 "data_size": 63488 00:17:04.091 }, 00:17:04.091 { 00:17:04.091 "name": "BaseBdev3", 00:17:04.091 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:04.091 "is_configured": true, 00:17:04.091 "data_offset": 2048, 00:17:04.091 "data_size": 63488 00:17:04.091 } 00:17:04.091 ] 00:17:04.091 }' 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:04.091 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=628 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.091 19:06:30 
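The `bdev_raid.sh: line 666: [: =: unary operator expected` error recorded above is a classic single-bracket quoting bug: the trace shows `'[' = false ']'`, i.e. the left operand expanded to nothing, leaving `[` with only `=` and `false`. The test still proceeds because the failing `[` merely returns a nonzero status. A minimal reproduction via `sh` (the variable name `flag` is illustrative, not the one used in bdev_raid.sh):

```python
import subprocess

# Unquoted empty expansion collapses to `[ = false ]` -> usage error (status > 1),
# with the same "unary operator expected" diagnostic seen in the log under bash.
unquoted = subprocess.run(["sh", "-c", 'flag=""; [ $flag = false ]'],
                          capture_output=True, text=True)

# Quoting preserves the empty operand: `[ "" = false ]` is a valid, false test.
quoted = subprocess.run(["sh", "-c", 'flag=""; [ "$flag" = false ]'],
                        capture_output=True, text=True)

print(unquoted.returncode, quoted.returncode)
```

Quoting the expansion (or using `[[ ... ]]` in bash) is the usual fix for this class of error.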
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.091 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.349 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.349 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.349 "name": "raid_bdev1", 00:17:04.349 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:04.349 "strip_size_kb": 64, 00:17:04.349 "state": "online", 00:17:04.349 "raid_level": "raid5f", 00:17:04.349 "superblock": true, 00:17:04.349 "num_base_bdevs": 3, 00:17:04.349 "num_base_bdevs_discovered": 3, 00:17:04.349 "num_base_bdevs_operational": 3, 00:17:04.349 "process": { 00:17:04.349 "type": "rebuild", 00:17:04.349 "target": "spare", 00:17:04.349 "progress": { 00:17:04.349 "blocks": 22528, 00:17:04.349 "percent": 17 00:17:04.349 } 00:17:04.349 }, 00:17:04.349 "base_bdevs_list": [ 00:17:04.349 { 00:17:04.349 "name": "spare", 00:17:04.349 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:04.349 "is_configured": true, 00:17:04.349 "data_offset": 2048, 00:17:04.349 
"data_size": 63488 00:17:04.349 }, 00:17:04.349 { 00:17:04.349 "name": "BaseBdev2", 00:17:04.349 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:04.349 "is_configured": true, 00:17:04.349 "data_offset": 2048, 00:17:04.349 "data_size": 63488 00:17:04.349 }, 00:17:04.349 { 00:17:04.349 "name": "BaseBdev3", 00:17:04.349 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:04.349 "is_configured": true, 00:17:04.349 "data_offset": 2048, 00:17:04.349 "data_size": 63488 00:17:04.349 } 00:17:04.349 ] 00:17:04.349 }' 00:17:04.349 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.349 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.349 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.349 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.349 19:06:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.284 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.284 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.284 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.284 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.284 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.284 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.284 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.284 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.284 
19:06:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.284 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.284 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.284 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.284 "name": "raid_bdev1", 00:17:05.284 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:05.284 "strip_size_kb": 64, 00:17:05.284 "state": "online", 00:17:05.284 "raid_level": "raid5f", 00:17:05.284 "superblock": true, 00:17:05.284 "num_base_bdevs": 3, 00:17:05.284 "num_base_bdevs_discovered": 3, 00:17:05.284 "num_base_bdevs_operational": 3, 00:17:05.284 "process": { 00:17:05.284 "type": "rebuild", 00:17:05.284 "target": "spare", 00:17:05.284 "progress": { 00:17:05.284 "blocks": 45056, 00:17:05.284 "percent": 35 00:17:05.284 } 00:17:05.284 }, 00:17:05.284 "base_bdevs_list": [ 00:17:05.284 { 00:17:05.284 "name": "spare", 00:17:05.284 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:05.284 "is_configured": true, 00:17:05.284 "data_offset": 2048, 00:17:05.284 "data_size": 63488 00:17:05.284 }, 00:17:05.284 { 00:17:05.284 "name": "BaseBdev2", 00:17:05.284 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:05.284 "is_configured": true, 00:17:05.284 "data_offset": 2048, 00:17:05.284 "data_size": 63488 00:17:05.284 }, 00:17:05.284 { 00:17:05.284 "name": "BaseBdev3", 00:17:05.284 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:05.284 "is_configured": true, 00:17:05.284 "data_offset": 2048, 00:17:05.284 "data_size": 63488 00:17:05.284 } 00:17:05.284 ] 00:17:05.284 }' 00:17:05.543 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.543 19:06:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.543 19:06:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.543 19:06:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.543 19:06:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.479 "name": "raid_bdev1", 00:17:06.479 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:06.479 "strip_size_kb": 64, 00:17:06.479 "state": "online", 00:17:06.479 "raid_level": "raid5f", 00:17:06.479 "superblock": true, 00:17:06.479 "num_base_bdevs": 3, 00:17:06.479 "num_base_bdevs_discovered": 3, 00:17:06.479 "num_base_bdevs_operational": 
3, 00:17:06.479 "process": { 00:17:06.479 "type": "rebuild", 00:17:06.479 "target": "spare", 00:17:06.479 "progress": { 00:17:06.479 "blocks": 69632, 00:17:06.479 "percent": 54 00:17:06.479 } 00:17:06.479 }, 00:17:06.479 "base_bdevs_list": [ 00:17:06.479 { 00:17:06.479 "name": "spare", 00:17:06.479 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:06.479 "is_configured": true, 00:17:06.479 "data_offset": 2048, 00:17:06.479 "data_size": 63488 00:17:06.479 }, 00:17:06.479 { 00:17:06.479 "name": "BaseBdev2", 00:17:06.479 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:06.479 "is_configured": true, 00:17:06.479 "data_offset": 2048, 00:17:06.479 "data_size": 63488 00:17:06.479 }, 00:17:06.479 { 00:17:06.479 "name": "BaseBdev3", 00:17:06.479 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:06.479 "is_configured": true, 00:17:06.479 "data_offset": 2048, 00:17:06.479 "data_size": 63488 00:17:06.479 } 00:17:06.479 ] 00:17:06.479 }' 00:17:06.479 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.737 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.737 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.737 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.737 19:06:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.669 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.669 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.669 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.669 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.669 
19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.669 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.669 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.669 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.669 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.669 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.669 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.669 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.669 "name": "raid_bdev1", 00:17:07.669 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:07.669 "strip_size_kb": 64, 00:17:07.669 "state": "online", 00:17:07.669 "raid_level": "raid5f", 00:17:07.669 "superblock": true, 00:17:07.669 "num_base_bdevs": 3, 00:17:07.669 "num_base_bdevs_discovered": 3, 00:17:07.669 "num_base_bdevs_operational": 3, 00:17:07.669 "process": { 00:17:07.669 "type": "rebuild", 00:17:07.669 "target": "spare", 00:17:07.669 "progress": { 00:17:07.669 "blocks": 94208, 00:17:07.669 "percent": 74 00:17:07.669 } 00:17:07.669 }, 00:17:07.669 "base_bdevs_list": [ 00:17:07.669 { 00:17:07.669 "name": "spare", 00:17:07.669 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:07.669 "is_configured": true, 00:17:07.669 "data_offset": 2048, 00:17:07.669 "data_size": 63488 00:17:07.669 }, 00:17:07.669 { 00:17:07.669 "name": "BaseBdev2", 00:17:07.669 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:07.669 "is_configured": true, 00:17:07.669 "data_offset": 2048, 00:17:07.669 "data_size": 63488 00:17:07.669 }, 00:17:07.669 { 00:17:07.669 "name": "BaseBdev3", 00:17:07.670 "uuid": 
"4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:07.670 "is_configured": true, 00:17:07.670 "data_offset": 2048, 00:17:07.670 "data_size": 63488 00:17:07.670 } 00:17:07.670 ] 00:17:07.670 }' 00:17:07.670 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.927 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.927 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.927 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.927 19:06:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.859 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.859 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.859 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.859 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.859 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.859 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.859 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.859 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.859 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.859 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.859 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.859 
19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.859 "name": "raid_bdev1", 00:17:08.859 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:08.859 "strip_size_kb": 64, 00:17:08.859 "state": "online", 00:17:08.860 "raid_level": "raid5f", 00:17:08.860 "superblock": true, 00:17:08.860 "num_base_bdevs": 3, 00:17:08.860 "num_base_bdevs_discovered": 3, 00:17:08.860 "num_base_bdevs_operational": 3, 00:17:08.860 "process": { 00:17:08.860 "type": "rebuild", 00:17:08.860 "target": "spare", 00:17:08.860 "progress": { 00:17:08.860 "blocks": 116736, 00:17:08.860 "percent": 91 00:17:08.860 } 00:17:08.860 }, 00:17:08.860 "base_bdevs_list": [ 00:17:08.860 { 00:17:08.860 "name": "spare", 00:17:08.860 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:08.860 "is_configured": true, 00:17:08.860 "data_offset": 2048, 00:17:08.860 "data_size": 63488 00:17:08.860 }, 00:17:08.860 { 00:17:08.860 "name": "BaseBdev2", 00:17:08.860 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:08.860 "is_configured": true, 00:17:08.860 "data_offset": 2048, 00:17:08.860 "data_size": 63488 00:17:08.860 }, 00:17:08.860 { 00:17:08.860 "name": "BaseBdev3", 00:17:08.860 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:08.860 "is_configured": true, 00:17:08.860 "data_offset": 2048, 00:17:08.860 "data_size": 63488 00:17:08.860 } 00:17:08.860 ] 00:17:08.860 }' 00:17:08.860 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.860 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.860 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.118 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.118 19:06:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.377 [2024-11-26 19:06:35.830551] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:09.377 [2024-11-26 19:06:35.830976] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:09.377 [2024-11-26 19:06:35.831184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.945 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.945 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.945 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.945 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.945 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.945 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.945 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.945 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.945 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.945 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.945 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.204 "name": "raid_bdev1", 00:17:10.204 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:10.204 "strip_size_kb": 64, 00:17:10.204 "state": "online", 00:17:10.204 "raid_level": "raid5f", 00:17:10.204 "superblock": true, 00:17:10.204 "num_base_bdevs": 3, 00:17:10.204 "num_base_bdevs_discovered": 3, 
00:17:10.204 "num_base_bdevs_operational": 3, 00:17:10.204 "base_bdevs_list": [ 00:17:10.204 { 00:17:10.204 "name": "spare", 00:17:10.204 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:10.204 "is_configured": true, 00:17:10.204 "data_offset": 2048, 00:17:10.204 "data_size": 63488 00:17:10.204 }, 00:17:10.204 { 00:17:10.204 "name": "BaseBdev2", 00:17:10.204 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:10.204 "is_configured": true, 00:17:10.204 "data_offset": 2048, 00:17:10.204 "data_size": 63488 00:17:10.204 }, 00:17:10.204 { 00:17:10.204 "name": "BaseBdev3", 00:17:10.204 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:10.204 "is_configured": true, 00:17:10.204 "data_offset": 2048, 00:17:10.204 "data_size": 63488 00:17:10.204 } 00:17:10.204 ] 00:17:10.204 }' 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.204 "name": "raid_bdev1", 00:17:10.204 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:10.204 "strip_size_kb": 64, 00:17:10.204 "state": "online", 00:17:10.204 "raid_level": "raid5f", 00:17:10.204 "superblock": true, 00:17:10.204 "num_base_bdevs": 3, 00:17:10.204 "num_base_bdevs_discovered": 3, 00:17:10.204 "num_base_bdevs_operational": 3, 00:17:10.204 "base_bdevs_list": [ 00:17:10.204 { 00:17:10.204 "name": "spare", 00:17:10.204 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:10.204 "is_configured": true, 00:17:10.204 "data_offset": 2048, 00:17:10.204 "data_size": 63488 00:17:10.204 }, 00:17:10.204 { 00:17:10.204 "name": "BaseBdev2", 00:17:10.204 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:10.204 "is_configured": true, 00:17:10.204 "data_offset": 2048, 00:17:10.204 "data_size": 63488 00:17:10.204 }, 00:17:10.204 { 00:17:10.204 "name": "BaseBdev3", 00:17:10.204 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:10.204 "is_configured": true, 00:17:10.204 "data_offset": 2048, 00:17:10.204 "data_size": 63488 00:17:10.204 } 00:17:10.204 ] 00:17:10.204 }' 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.204 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.468 "name": "raid_bdev1", 00:17:10.468 "uuid": 
"0105a334-85ba-4d12-ac07-9931a4605029", 00:17:10.468 "strip_size_kb": 64, 00:17:10.468 "state": "online", 00:17:10.468 "raid_level": "raid5f", 00:17:10.468 "superblock": true, 00:17:10.468 "num_base_bdevs": 3, 00:17:10.468 "num_base_bdevs_discovered": 3, 00:17:10.468 "num_base_bdevs_operational": 3, 00:17:10.468 "base_bdevs_list": [ 00:17:10.468 { 00:17:10.468 "name": "spare", 00:17:10.468 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:10.468 "is_configured": true, 00:17:10.468 "data_offset": 2048, 00:17:10.468 "data_size": 63488 00:17:10.468 }, 00:17:10.468 { 00:17:10.468 "name": "BaseBdev2", 00:17:10.468 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:10.468 "is_configured": true, 00:17:10.468 "data_offset": 2048, 00:17:10.468 "data_size": 63488 00:17:10.468 }, 00:17:10.468 { 00:17:10.468 "name": "BaseBdev3", 00:17:10.468 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:10.468 "is_configured": true, 00:17:10.468 "data_offset": 2048, 00:17:10.468 "data_size": 63488 00:17:10.468 } 00:17:10.468 ] 00:17:10.468 }' 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.468 19:06:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.752 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:10.752 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.011 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.011 [2024-11-26 19:06:37.377409] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.011 [2024-11-26 19:06:37.377448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.011 [2024-11-26 19:06:37.377570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.011 [2024-11-26 19:06:37.377682] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.011 [2024-11-26 19:06:37.377707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:11.011 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:11.012 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:11.269 /dev/nbd0 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.269 1+0 records in 00:17:11.269 1+0 records out 00:17:11.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578177 s, 7.1 MB/s 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.269 19:06:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:11.269 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.270 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:11.270 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:11.270 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.270 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:11.270 19:06:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:11.530 /dev/nbd1 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:11.530 1+0 records in 00:17:11.530 1+0 records out 00:17:11.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392591 s, 10.4 MB/s 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:11.530 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:11.788 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:17:12.047 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:12.047 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:12.047 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:12.047 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.047 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.047 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:12.047 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:12.047 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.047 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.047 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.616 [2024-11-26 19:06:38.981609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:12.616 [2024-11-26 19:06:38.981713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.616 [2024-11-26 19:06:38.981757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:12.616 [2024-11-26 19:06:38.981773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.616 [2024-11-26 19:06:38.985071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.616 [2024-11-26 19:06:38.985124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:12.616 [2024-11-26 19:06:38.985279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:12.616 [2024-11-26 19:06:38.985393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.616 [2024-11-26 19:06:38.985567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:12.616 [2024-11-26 19:06:38.985799] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:12.616 spare 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.616 19:06:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.616 [2024-11-26 19:06:39.085957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:12.616 [2024-11-26 19:06:39.086129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:12.616 [2024-11-26 19:06:39.086561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:12.616 [2024-11-26 19:06:39.091446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:12.616 [2024-11-26 19:06:39.091596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:12.616 [2024-11-26 19:06:39.091999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.616 "name": "raid_bdev1", 00:17:12.616 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:12.616 "strip_size_kb": 64, 00:17:12.616 "state": "online", 00:17:12.616 "raid_level": "raid5f", 00:17:12.616 "superblock": true, 00:17:12.616 "num_base_bdevs": 3, 00:17:12.616 "num_base_bdevs_discovered": 3, 00:17:12.616 "num_base_bdevs_operational": 3, 00:17:12.616 "base_bdevs_list": [ 00:17:12.616 { 00:17:12.616 "name": "spare", 00:17:12.616 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:12.616 "is_configured": true, 00:17:12.616 "data_offset": 2048, 00:17:12.616 "data_size": 63488 00:17:12.616 }, 00:17:12.616 { 00:17:12.616 "name": "BaseBdev2", 00:17:12.616 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:12.616 "is_configured": true, 00:17:12.616 "data_offset": 
2048, 00:17:12.616 "data_size": 63488 00:17:12.616 }, 00:17:12.616 { 00:17:12.616 "name": "BaseBdev3", 00:17:12.616 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:12.616 "is_configured": true, 00:17:12.616 "data_offset": 2048, 00:17:12.616 "data_size": 63488 00:17:12.616 } 00:17:12.616 ] 00:17:12.616 }' 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.616 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.184 "name": "raid_bdev1", 00:17:13.184 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:13.184 "strip_size_kb": 64, 00:17:13.184 "state": "online", 00:17:13.184 "raid_level": "raid5f", 00:17:13.184 "superblock": true, 00:17:13.184 
"num_base_bdevs": 3, 00:17:13.184 "num_base_bdevs_discovered": 3, 00:17:13.184 "num_base_bdevs_operational": 3, 00:17:13.184 "base_bdevs_list": [ 00:17:13.184 { 00:17:13.184 "name": "spare", 00:17:13.184 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:13.184 "is_configured": true, 00:17:13.184 "data_offset": 2048, 00:17:13.184 "data_size": 63488 00:17:13.184 }, 00:17:13.184 { 00:17:13.184 "name": "BaseBdev2", 00:17:13.184 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:13.184 "is_configured": true, 00:17:13.184 "data_offset": 2048, 00:17:13.184 "data_size": 63488 00:17:13.184 }, 00:17:13.184 { 00:17:13.184 "name": "BaseBdev3", 00:17:13.184 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:13.184 "is_configured": true, 00:17:13.184 "data_offset": 2048, 00:17:13.184 "data_size": 63488 00:17:13.184 } 00:17:13.184 ] 00:17:13.184 }' 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.184 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.443 19:06:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.443 [2024-11-26 19:06:39.818234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.443 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.444 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.444 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.444 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.444 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.444 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:13.444 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.444 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.444 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.444 "name": "raid_bdev1", 00:17:13.444 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:13.444 "strip_size_kb": 64, 00:17:13.444 "state": "online", 00:17:13.444 "raid_level": "raid5f", 00:17:13.444 "superblock": true, 00:17:13.444 "num_base_bdevs": 3, 00:17:13.444 "num_base_bdevs_discovered": 2, 00:17:13.444 "num_base_bdevs_operational": 2, 00:17:13.444 "base_bdevs_list": [ 00:17:13.444 { 00:17:13.444 "name": null, 00:17:13.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.444 "is_configured": false, 00:17:13.444 "data_offset": 0, 00:17:13.444 "data_size": 63488 00:17:13.444 }, 00:17:13.444 { 00:17:13.444 "name": "BaseBdev2", 00:17:13.444 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:13.444 "is_configured": true, 00:17:13.444 "data_offset": 2048, 00:17:13.444 "data_size": 63488 00:17:13.444 }, 00:17:13.444 { 00:17:13.444 "name": "BaseBdev3", 00:17:13.444 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:13.444 "is_configured": true, 00:17:13.444 "data_offset": 2048, 00:17:13.444 "data_size": 63488 00:17:13.444 } 00:17:13.444 ] 00:17:13.444 }' 00:17:13.444 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.444 19:06:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.012 19:06:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.012 19:06:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.012 19:06:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.012 [2024-11-26 19:06:40.362489] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.012 [2024-11-26 19:06:40.362793] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.012 [2024-11-26 19:06:40.362819] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:14.012 [2024-11-26 19:06:40.362880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.012 [2024-11-26 19:06:40.378466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:14.012 19:06:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.012 19:06:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:14.012 [2024-11-26 19:06:40.385977] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.951 "name": "raid_bdev1", 00:17:14.951 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:14.951 "strip_size_kb": 64, 00:17:14.951 "state": "online", 00:17:14.951 "raid_level": "raid5f", 00:17:14.951 "superblock": true, 00:17:14.951 "num_base_bdevs": 3, 00:17:14.951 "num_base_bdevs_discovered": 3, 00:17:14.951 "num_base_bdevs_operational": 3, 00:17:14.951 "process": { 00:17:14.951 "type": "rebuild", 00:17:14.951 "target": "spare", 00:17:14.951 "progress": { 00:17:14.951 "blocks": 18432, 00:17:14.951 "percent": 14 00:17:14.951 } 00:17:14.951 }, 00:17:14.951 "base_bdevs_list": [ 00:17:14.951 { 00:17:14.951 "name": "spare", 00:17:14.951 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:14.951 "is_configured": true, 00:17:14.951 "data_offset": 2048, 00:17:14.951 "data_size": 63488 00:17:14.951 }, 00:17:14.951 { 00:17:14.951 "name": "BaseBdev2", 00:17:14.951 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:14.951 "is_configured": true, 00:17:14.951 "data_offset": 2048, 00:17:14.951 "data_size": 63488 00:17:14.951 }, 00:17:14.951 { 00:17:14.951 "name": "BaseBdev3", 00:17:14.951 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:14.951 "is_configured": true, 00:17:14.951 "data_offset": 2048, 00:17:14.951 "data_size": 63488 00:17:14.951 } 00:17:14.951 ] 00:17:14.951 }' 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.951 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.951 [2024-11-26 19:06:41.541072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.210 [2024-11-26 19:06:41.601414] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:15.210 [2024-11-26 19:06:41.601667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.210 [2024-11-26 19:06:41.601792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.210 [2024-11-26 19:06:41.601862] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.210 19:06:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.210 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.210 "name": "raid_bdev1", 00:17:15.210 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:15.210 "strip_size_kb": 64, 00:17:15.210 "state": "online", 00:17:15.210 "raid_level": "raid5f", 00:17:15.210 "superblock": true, 00:17:15.210 "num_base_bdevs": 3, 00:17:15.210 "num_base_bdevs_discovered": 2, 00:17:15.210 "num_base_bdevs_operational": 2, 00:17:15.210 "base_bdevs_list": [ 00:17:15.210 { 00:17:15.210 "name": null, 00:17:15.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.210 "is_configured": false, 00:17:15.210 "data_offset": 0, 00:17:15.210 "data_size": 63488 00:17:15.210 }, 00:17:15.210 { 00:17:15.210 "name": "BaseBdev2", 00:17:15.210 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:15.210 "is_configured": true, 00:17:15.211 "data_offset": 2048, 00:17:15.211 "data_size": 63488 00:17:15.211 }, 00:17:15.211 { 00:17:15.211 "name": "BaseBdev3", 00:17:15.211 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:15.211 "is_configured": true, 00:17:15.211 "data_offset": 2048, 00:17:15.211 "data_size": 63488 00:17:15.211 } 00:17:15.211 ] 00:17:15.211 }' 00:17:15.211 19:06:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.211 19:06:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.779 19:06:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:15.779 19:06:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.779 19:06:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.779 [2024-11-26 19:06:42.226159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:15.779 [2024-11-26 19:06:42.226468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.779 [2024-11-26 19:06:42.226511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:15.779 [2024-11-26 19:06:42.226534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.779 [2024-11-26 19:06:42.227387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.779 [2024-11-26 19:06:42.227439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:15.779 [2024-11-26 19:06:42.227586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:15.779 [2024-11-26 19:06:42.227615] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:15.779 [2024-11-26 19:06:42.227631] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:15.779 [2024-11-26 19:06:42.227664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.779 [2024-11-26 19:06:42.244117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:15.779 spare 00:17:15.779 19:06:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.779 19:06:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:15.779 [2024-11-26 19:06:42.252190] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.772 "name": "raid_bdev1", 00:17:16.772 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:16.772 "strip_size_kb": 64, 00:17:16.772 "state": 
"online", 00:17:16.772 "raid_level": "raid5f", 00:17:16.772 "superblock": true, 00:17:16.772 "num_base_bdevs": 3, 00:17:16.772 "num_base_bdevs_discovered": 3, 00:17:16.772 "num_base_bdevs_operational": 3, 00:17:16.772 "process": { 00:17:16.772 "type": "rebuild", 00:17:16.772 "target": "spare", 00:17:16.772 "progress": { 00:17:16.772 "blocks": 18432, 00:17:16.772 "percent": 14 00:17:16.772 } 00:17:16.772 }, 00:17:16.772 "base_bdevs_list": [ 00:17:16.772 { 00:17:16.772 "name": "spare", 00:17:16.772 "uuid": "d8c33a1a-03e8-5500-b1b6-eb1b7381776f", 00:17:16.772 "is_configured": true, 00:17:16.772 "data_offset": 2048, 00:17:16.772 "data_size": 63488 00:17:16.772 }, 00:17:16.772 { 00:17:16.772 "name": "BaseBdev2", 00:17:16.772 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:16.772 "is_configured": true, 00:17:16.772 "data_offset": 2048, 00:17:16.772 "data_size": 63488 00:17:16.772 }, 00:17:16.772 { 00:17:16.772 "name": "BaseBdev3", 00:17:16.772 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:16.772 "is_configured": true, 00:17:16.772 "data_offset": 2048, 00:17:16.772 "data_size": 63488 00:17:16.772 } 00:17:16.772 ] 00:17:16.772 }' 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.772 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.031 [2024-11-26 19:06:43.426931] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.031 [2024-11-26 19:06:43.471120] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.031 [2024-11-26 19:06:43.471712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.031 [2024-11-26 19:06:43.472024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.031 [2024-11-26 19:06:43.472145] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.031 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.031 "name": "raid_bdev1", 00:17:17.031 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:17.031 "strip_size_kb": 64, 00:17:17.032 "state": "online", 00:17:17.032 "raid_level": "raid5f", 00:17:17.032 "superblock": true, 00:17:17.032 "num_base_bdevs": 3, 00:17:17.032 "num_base_bdevs_discovered": 2, 00:17:17.032 "num_base_bdevs_operational": 2, 00:17:17.032 "base_bdevs_list": [ 00:17:17.032 { 00:17:17.032 "name": null, 00:17:17.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.032 "is_configured": false, 00:17:17.032 "data_offset": 0, 00:17:17.032 "data_size": 63488 00:17:17.032 }, 00:17:17.032 { 00:17:17.032 "name": "BaseBdev2", 00:17:17.032 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:17.032 "is_configured": true, 00:17:17.032 "data_offset": 2048, 00:17:17.032 "data_size": 63488 00:17:17.032 }, 00:17:17.032 { 00:17:17.032 "name": "BaseBdev3", 00:17:17.032 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:17.032 "is_configured": true, 00:17:17.032 "data_offset": 2048, 00:17:17.032 "data_size": 63488 00:17:17.032 } 00:17:17.032 ] 00:17:17.032 }' 00:17:17.032 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.032 19:06:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.600 "name": "raid_bdev1", 00:17:17.600 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:17.600 "strip_size_kb": 64, 00:17:17.600 "state": "online", 00:17:17.600 "raid_level": "raid5f", 00:17:17.600 "superblock": true, 00:17:17.600 "num_base_bdevs": 3, 00:17:17.600 "num_base_bdevs_discovered": 2, 00:17:17.600 "num_base_bdevs_operational": 2, 00:17:17.600 "base_bdevs_list": [ 00:17:17.600 { 00:17:17.600 "name": null, 00:17:17.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.600 "is_configured": false, 00:17:17.600 "data_offset": 0, 00:17:17.600 "data_size": 63488 00:17:17.600 }, 00:17:17.600 { 00:17:17.600 "name": "BaseBdev2", 00:17:17.600 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:17.600 "is_configured": true, 00:17:17.600 "data_offset": 2048, 00:17:17.600 "data_size": 63488 00:17:17.600 }, 00:17:17.600 { 00:17:17.600 "name": "BaseBdev3", 00:17:17.600 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:17.600 "is_configured": true, 
00:17:17.600 "data_offset": 2048, 00:17:17.600 "data_size": 63488 00:17:17.600 } 00:17:17.600 ] 00:17:17.600 }' 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.600 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.601 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.601 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:17.601 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.601 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.601 [2024-11-26 19:06:44.217382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:17.601 [2024-11-26 19:06:44.217797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.601 [2024-11-26 19:06:44.217902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:17.601 [2024-11-26 19:06:44.217932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.601 [2024-11-26 19:06:44.219098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.601 [2024-11-26 
19:06:44.219182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:17.601 [2024-11-26 19:06:44.219444] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:17.601 [2024-11-26 19:06:44.219489] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:17.601 [2024-11-26 19:06:44.219534] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:17.601 [2024-11-26 19:06:44.219558] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:17.860 BaseBdev1 00:17:17.860 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.860 19:06:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.797 19:06:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.797 "name": "raid_bdev1", 00:17:18.797 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:18.797 "strip_size_kb": 64, 00:17:18.797 "state": "online", 00:17:18.797 "raid_level": "raid5f", 00:17:18.797 "superblock": true, 00:17:18.797 "num_base_bdevs": 3, 00:17:18.797 "num_base_bdevs_discovered": 2, 00:17:18.797 "num_base_bdevs_operational": 2, 00:17:18.797 "base_bdevs_list": [ 00:17:18.797 { 00:17:18.797 "name": null, 00:17:18.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.797 "is_configured": false, 00:17:18.797 "data_offset": 0, 00:17:18.797 "data_size": 63488 00:17:18.797 }, 00:17:18.797 { 00:17:18.797 "name": "BaseBdev2", 00:17:18.797 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:18.797 "is_configured": true, 00:17:18.797 "data_offset": 2048, 00:17:18.797 "data_size": 63488 00:17:18.797 }, 00:17:18.797 { 00:17:18.797 "name": "BaseBdev3", 00:17:18.797 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:18.797 "is_configured": true, 00:17:18.797 "data_offset": 2048, 00:17:18.797 "data_size": 63488 00:17:18.797 } 00:17:18.797 ] 00:17:18.797 }' 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.797 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.365 "name": "raid_bdev1", 00:17:19.365 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:19.365 "strip_size_kb": 64, 00:17:19.365 "state": "online", 00:17:19.365 "raid_level": "raid5f", 00:17:19.365 "superblock": true, 00:17:19.365 "num_base_bdevs": 3, 00:17:19.365 "num_base_bdevs_discovered": 2, 00:17:19.365 "num_base_bdevs_operational": 2, 00:17:19.365 "base_bdevs_list": [ 00:17:19.365 { 00:17:19.365 "name": null, 00:17:19.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.365 "is_configured": false, 00:17:19.365 "data_offset": 0, 00:17:19.365 "data_size": 63488 00:17:19.365 }, 00:17:19.365 { 00:17:19.365 "name": "BaseBdev2", 00:17:19.365 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 
00:17:19.365 "is_configured": true, 00:17:19.365 "data_offset": 2048, 00:17:19.365 "data_size": 63488 00:17:19.365 }, 00:17:19.365 { 00:17:19.365 "name": "BaseBdev3", 00:17:19.365 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:19.365 "is_configured": true, 00:17:19.365 "data_offset": 2048, 00:17:19.365 "data_size": 63488 00:17:19.365 } 00:17:19.365 ] 00:17:19.365 }' 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.365 19:06:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.365 [2024-11-26 19:06:45.905992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.365 [2024-11-26 19:06:45.906238] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.365 [2024-11-26 19:06:45.906278] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:19.365 request: 00:17:19.365 { 00:17:19.365 "base_bdev": "BaseBdev1", 00:17:19.365 "raid_bdev": "raid_bdev1", 00:17:19.365 "method": "bdev_raid_add_base_bdev", 00:17:19.365 "req_id": 1 00:17:19.365 } 00:17:19.365 Got JSON-RPC error response 00:17:19.365 response: 00:17:19.365 { 00:17:19.365 "code": -22, 00:17:19.365 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:19.365 } 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.365 19:06:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.301 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.560 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.560 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.560 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.560 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.560 "name": "raid_bdev1", 00:17:20.560 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:20.560 "strip_size_kb": 64, 00:17:20.560 "state": "online", 00:17:20.560 "raid_level": "raid5f", 00:17:20.560 "superblock": true, 00:17:20.560 "num_base_bdevs": 3, 00:17:20.560 "num_base_bdevs_discovered": 2, 00:17:20.560 "num_base_bdevs_operational": 2, 00:17:20.560 "base_bdevs_list": [ 00:17:20.560 { 00:17:20.560 "name": null, 00:17:20.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.560 "is_configured": false, 00:17:20.560 "data_offset": 0, 00:17:20.560 "data_size": 63488 00:17:20.560 }, 00:17:20.560 { 00:17:20.560 
"name": "BaseBdev2", 00:17:20.560 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:20.560 "is_configured": true, 00:17:20.560 "data_offset": 2048, 00:17:20.560 "data_size": 63488 00:17:20.560 }, 00:17:20.560 { 00:17:20.560 "name": "BaseBdev3", 00:17:20.560 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:20.560 "is_configured": true, 00:17:20.560 "data_offset": 2048, 00:17:20.560 "data_size": 63488 00:17:20.560 } 00:17:20.560 ] 00:17:20.560 }' 00:17:20.560 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.560 19:06:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.129 "name": "raid_bdev1", 00:17:21.129 "uuid": "0105a334-85ba-4d12-ac07-9931a4605029", 00:17:21.129 
"strip_size_kb": 64, 00:17:21.129 "state": "online", 00:17:21.129 "raid_level": "raid5f", 00:17:21.129 "superblock": true, 00:17:21.129 "num_base_bdevs": 3, 00:17:21.129 "num_base_bdevs_discovered": 2, 00:17:21.129 "num_base_bdevs_operational": 2, 00:17:21.129 "base_bdevs_list": [ 00:17:21.129 { 00:17:21.129 "name": null, 00:17:21.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.129 "is_configured": false, 00:17:21.129 "data_offset": 0, 00:17:21.129 "data_size": 63488 00:17:21.129 }, 00:17:21.129 { 00:17:21.129 "name": "BaseBdev2", 00:17:21.129 "uuid": "fe31a546-4359-5907-bff9-d99c112d0471", 00:17:21.129 "is_configured": true, 00:17:21.129 "data_offset": 2048, 00:17:21.129 "data_size": 63488 00:17:21.129 }, 00:17:21.129 { 00:17:21.129 "name": "BaseBdev3", 00:17:21.129 "uuid": "4c52ee07-9c3a-582f-bd88-dd056b3dca32", 00:17:21.129 "is_configured": true, 00:17:21.129 "data_offset": 2048, 00:17:21.129 "data_size": 63488 00:17:21.129 } 00:17:21.129 ] 00:17:21.129 }' 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82896 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82896 ']' 00:17:21.129 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82896 00:17:21.130 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:21.130 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.130 19:06:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82896 00:17:21.130 killing process with pid 82896 00:17:21.130 Received shutdown signal, test time was about 60.000000 seconds 00:17:21.130 00:17:21.130 Latency(us) 00:17:21.130 [2024-11-26T19:06:47.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.130 [2024-11-26T19:06:47.753Z] =================================================================================================================== 00:17:21.130 [2024-11-26T19:06:47.753Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.130 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.130 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.130 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82896' 00:17:21.130 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82896 00:17:21.130 [2024-11-26 19:06:47.663032] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.130 19:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82896 00:17:21.130 [2024-11-26 19:06:47.663216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.130 [2024-11-26 19:06:47.663328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.130 [2024-11-26 19:06:47.663349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:21.697 [2024-11-26 19:06:48.073941] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.077 ************************************ 00:17:23.077 END TEST raid5f_rebuild_test_sb 00:17:23.077 ************************************ 00:17:23.077 19:06:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:23.077 00:17:23.077 real 0m25.335s 00:17:23.077 user 0m33.599s 00:17:23.077 sys 0m2.811s 00:17:23.077 19:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.077 19:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.077 19:06:49 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:23.077 19:06:49 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:23.077 19:06:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:23.077 19:06:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.077 19:06:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.077 ************************************ 00:17:23.077 START TEST raid5f_state_function_test 00:17:23.077 ************************************ 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83665 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83665' 00:17:23.077 Process raid pid: 83665 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83665 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83665 ']' 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.077 19:06:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.077 [2024-11-26 19:06:49.470781] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:17:23.077 [2024-11-26 19:06:49.471008] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.077 [2024-11-26 19:06:49.688097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.336 [2024-11-26 19:06:49.879418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.595 [2024-11-26 19:06:50.136985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.595 [2024-11-26 19:06:50.137035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.855 [2024-11-26 19:06:50.396963] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.855 [2024-11-26 19:06:50.397028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.855 [2024-11-26 19:06:50.397046] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.855 [2024-11-26 19:06:50.397062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.855 [2024-11-26 19:06:50.397072] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:23.855 [2024-11-26 19:06:50.397086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:23.855 [2024-11-26 19:06:50.397096] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:23.855 [2024-11-26 19:06:50.397110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.855 "name": "Existed_Raid", 00:17:23.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.855 "strip_size_kb": 64, 00:17:23.855 "state": "configuring", 00:17:23.855 "raid_level": "raid5f", 00:17:23.855 "superblock": false, 00:17:23.855 "num_base_bdevs": 4, 00:17:23.855 "num_base_bdevs_discovered": 0, 00:17:23.855 "num_base_bdevs_operational": 4, 00:17:23.855 "base_bdevs_list": [ 00:17:23.855 { 00:17:23.855 "name": "BaseBdev1", 00:17:23.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.855 "is_configured": false, 00:17:23.855 "data_offset": 0, 00:17:23.855 "data_size": 0 00:17:23.855 }, 00:17:23.855 { 00:17:23.855 "name": "BaseBdev2", 00:17:23.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.855 "is_configured": false, 00:17:23.855 "data_offset": 0, 00:17:23.855 "data_size": 0 00:17:23.855 }, 00:17:23.855 { 00:17:23.855 "name": "BaseBdev3", 00:17:23.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.855 "is_configured": false, 00:17:23.855 "data_offset": 0, 00:17:23.855 "data_size": 0 00:17:23.855 }, 00:17:23.855 { 00:17:23.855 "name": "BaseBdev4", 00:17:23.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.855 "is_configured": false, 00:17:23.855 "data_offset": 0, 00:17:23.855 "data_size": 0 00:17:23.855 } 00:17:23.855 ] 00:17:23.855 }' 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.855 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.423 19:06:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.423 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.423 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.423 [2024-11-26 19:06:50.945088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.423 [2024-11-26 19:06:50.945148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:24.423 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.423 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:24.423 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.423 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.423 [2024-11-26 19:06:50.953094] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.423 [2024-11-26 19:06:50.953149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.423 [2024-11-26 19:06:50.953165] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.423 [2024-11-26 19:06:50.953180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.423 [2024-11-26 19:06:50.953190] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.423 [2024-11-26 19:06:50.953205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.423 [2024-11-26 19:06:50.953215] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:24.423 [2024-11-26 19:06:50.953229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:24.423 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.423 19:06:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:24.423 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.423 19:06:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.423 [2024-11-26 19:06:51.009030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.423 BaseBdev1 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.423 
19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.423 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.423 [ 00:17:24.423 { 00:17:24.423 "name": "BaseBdev1", 00:17:24.423 "aliases": [ 00:17:24.423 "59a31121-7967-4979-8e46-3923c3dce8a6" 00:17:24.423 ], 00:17:24.423 "product_name": "Malloc disk", 00:17:24.423 "block_size": 512, 00:17:24.423 "num_blocks": 65536, 00:17:24.423 "uuid": "59a31121-7967-4979-8e46-3923c3dce8a6", 00:17:24.423 "assigned_rate_limits": { 00:17:24.423 "rw_ios_per_sec": 0, 00:17:24.423 "rw_mbytes_per_sec": 0, 00:17:24.423 "r_mbytes_per_sec": 0, 00:17:24.423 "w_mbytes_per_sec": 0 00:17:24.423 }, 00:17:24.423 "claimed": true, 00:17:24.423 "claim_type": "exclusive_write", 00:17:24.423 "zoned": false, 00:17:24.423 "supported_io_types": { 00:17:24.423 "read": true, 00:17:24.423 "write": true, 00:17:24.423 "unmap": true, 00:17:24.423 "flush": true, 00:17:24.423 "reset": true, 00:17:24.423 "nvme_admin": false, 00:17:24.423 "nvme_io": false, 00:17:24.423 "nvme_io_md": false, 00:17:24.423 "write_zeroes": true, 00:17:24.423 "zcopy": true, 00:17:24.423 "get_zone_info": false, 00:17:24.423 "zone_management": false, 00:17:24.423 "zone_append": false, 00:17:24.423 "compare": false, 00:17:24.423 "compare_and_write": false, 00:17:24.423 "abort": true, 00:17:24.423 "seek_hole": false, 00:17:24.423 "seek_data": false, 00:17:24.423 "copy": true, 00:17:24.423 "nvme_iov_md": false 00:17:24.683 }, 00:17:24.683 "memory_domains": [ 00:17:24.683 { 00:17:24.683 "dma_device_id": "system", 00:17:24.683 "dma_device_type": 1 00:17:24.683 }, 00:17:24.683 { 00:17:24.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.683 "dma_device_type": 2 00:17:24.683 } 00:17:24.683 ], 00:17:24.683 "driver_specific": {} 00:17:24.683 } 
00:17:24.683 ] 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.683 "name": "Existed_Raid", 00:17:24.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.683 "strip_size_kb": 64, 00:17:24.683 "state": "configuring", 00:17:24.683 "raid_level": "raid5f", 00:17:24.683 "superblock": false, 00:17:24.683 "num_base_bdevs": 4, 00:17:24.683 "num_base_bdevs_discovered": 1, 00:17:24.683 "num_base_bdevs_operational": 4, 00:17:24.683 "base_bdevs_list": [ 00:17:24.683 { 00:17:24.683 "name": "BaseBdev1", 00:17:24.683 "uuid": "59a31121-7967-4979-8e46-3923c3dce8a6", 00:17:24.683 "is_configured": true, 00:17:24.683 "data_offset": 0, 00:17:24.683 "data_size": 65536 00:17:24.683 }, 00:17:24.683 { 00:17:24.683 "name": "BaseBdev2", 00:17:24.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.683 "is_configured": false, 00:17:24.683 "data_offset": 0, 00:17:24.683 "data_size": 0 00:17:24.683 }, 00:17:24.683 { 00:17:24.683 "name": "BaseBdev3", 00:17:24.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.683 "is_configured": false, 00:17:24.683 "data_offset": 0, 00:17:24.683 "data_size": 0 00:17:24.683 }, 00:17:24.683 { 00:17:24.683 "name": "BaseBdev4", 00:17:24.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.683 "is_configured": false, 00:17:24.683 "data_offset": 0, 00:17:24.683 "data_size": 0 00:17:24.683 } 00:17:24.683 ] 00:17:24.683 }' 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.683 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.942 
[2024-11-26 19:06:51.533259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.942 [2024-11-26 19:06:51.533389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.942 [2024-11-26 19:06:51.541362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.942 [2024-11-26 19:06:51.543968] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.942 [2024-11-26 19:06:51.544028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.942 [2024-11-26 19:06:51.544045] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.942 [2024-11-26 19:06:51.544062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.942 [2024-11-26 19:06:51.544072] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:24.942 [2024-11-26 19:06:51.544085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.942 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.201 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.201 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.201 "name": "Existed_Raid", 00:17:25.201 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:25.201 "strip_size_kb": 64, 00:17:25.201 "state": "configuring", 00:17:25.201 "raid_level": "raid5f", 00:17:25.201 "superblock": false, 00:17:25.201 "num_base_bdevs": 4, 00:17:25.201 "num_base_bdevs_discovered": 1, 00:17:25.201 "num_base_bdevs_operational": 4, 00:17:25.201 "base_bdevs_list": [ 00:17:25.201 { 00:17:25.201 "name": "BaseBdev1", 00:17:25.201 "uuid": "59a31121-7967-4979-8e46-3923c3dce8a6", 00:17:25.201 "is_configured": true, 00:17:25.201 "data_offset": 0, 00:17:25.201 "data_size": 65536 00:17:25.201 }, 00:17:25.201 { 00:17:25.201 "name": "BaseBdev2", 00:17:25.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.201 "is_configured": false, 00:17:25.201 "data_offset": 0, 00:17:25.201 "data_size": 0 00:17:25.201 }, 00:17:25.201 { 00:17:25.201 "name": "BaseBdev3", 00:17:25.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.201 "is_configured": false, 00:17:25.201 "data_offset": 0, 00:17:25.201 "data_size": 0 00:17:25.201 }, 00:17:25.201 { 00:17:25.201 "name": "BaseBdev4", 00:17:25.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.201 "is_configured": false, 00:17:25.201 "data_offset": 0, 00:17:25.201 "data_size": 0 00:17:25.201 } 00:17:25.201 ] 00:17:25.201 }' 00:17:25.201 19:06:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.201 19:06:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.460 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:25.460 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.460 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.718 [2024-11-26 19:06:52.087747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.718 BaseBdev2 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.718 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.718 [ 00:17:25.718 { 00:17:25.718 "name": "BaseBdev2", 00:17:25.718 "aliases": [ 00:17:25.718 "da12ce0a-e571-4292-b827-2af4ee63a95c" 00:17:25.718 ], 00:17:25.718 "product_name": "Malloc disk", 00:17:25.718 "block_size": 512, 00:17:25.718 "num_blocks": 65536, 00:17:25.718 "uuid": "da12ce0a-e571-4292-b827-2af4ee63a95c", 00:17:25.718 "assigned_rate_limits": { 00:17:25.719 "rw_ios_per_sec": 0, 00:17:25.719 "rw_mbytes_per_sec": 0, 00:17:25.719 
"r_mbytes_per_sec": 0, 00:17:25.719 "w_mbytes_per_sec": 0 00:17:25.719 }, 00:17:25.719 "claimed": true, 00:17:25.719 "claim_type": "exclusive_write", 00:17:25.719 "zoned": false, 00:17:25.719 "supported_io_types": { 00:17:25.719 "read": true, 00:17:25.719 "write": true, 00:17:25.719 "unmap": true, 00:17:25.719 "flush": true, 00:17:25.719 "reset": true, 00:17:25.719 "nvme_admin": false, 00:17:25.719 "nvme_io": false, 00:17:25.719 "nvme_io_md": false, 00:17:25.719 "write_zeroes": true, 00:17:25.719 "zcopy": true, 00:17:25.719 "get_zone_info": false, 00:17:25.719 "zone_management": false, 00:17:25.719 "zone_append": false, 00:17:25.719 "compare": false, 00:17:25.719 "compare_and_write": false, 00:17:25.719 "abort": true, 00:17:25.719 "seek_hole": false, 00:17:25.719 "seek_data": false, 00:17:25.719 "copy": true, 00:17:25.719 "nvme_iov_md": false 00:17:25.719 }, 00:17:25.719 "memory_domains": [ 00:17:25.719 { 00:17:25.719 "dma_device_id": "system", 00:17:25.719 "dma_device_type": 1 00:17:25.719 }, 00:17:25.719 { 00:17:25.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.719 "dma_device_type": 2 00:17:25.719 } 00:17:25.719 ], 00:17:25.719 "driver_specific": {} 00:17:25.719 } 00:17:25.719 ] 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.719 "name": "Existed_Raid", 00:17:25.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.719 "strip_size_kb": 64, 00:17:25.719 "state": "configuring", 00:17:25.719 "raid_level": "raid5f", 00:17:25.719 "superblock": false, 00:17:25.719 "num_base_bdevs": 4, 00:17:25.719 "num_base_bdevs_discovered": 2, 00:17:25.719 "num_base_bdevs_operational": 4, 00:17:25.719 "base_bdevs_list": [ 00:17:25.719 { 00:17:25.719 "name": "BaseBdev1", 00:17:25.719 "uuid": 
"59a31121-7967-4979-8e46-3923c3dce8a6", 00:17:25.719 "is_configured": true, 00:17:25.719 "data_offset": 0, 00:17:25.719 "data_size": 65536 00:17:25.719 }, 00:17:25.719 { 00:17:25.719 "name": "BaseBdev2", 00:17:25.719 "uuid": "da12ce0a-e571-4292-b827-2af4ee63a95c", 00:17:25.719 "is_configured": true, 00:17:25.719 "data_offset": 0, 00:17:25.719 "data_size": 65536 00:17:25.719 }, 00:17:25.719 { 00:17:25.719 "name": "BaseBdev3", 00:17:25.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.719 "is_configured": false, 00:17:25.719 "data_offset": 0, 00:17:25.719 "data_size": 0 00:17:25.719 }, 00:17:25.719 { 00:17:25.719 "name": "BaseBdev4", 00:17:25.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.719 "is_configured": false, 00:17:25.719 "data_offset": 0, 00:17:25.719 "data_size": 0 00:17:25.719 } 00:17:25.719 ] 00:17:25.719 }' 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.719 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.287 [2024-11-26 19:06:52.717671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.287 BaseBdev3 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.287 [ 00:17:26.287 { 00:17:26.287 "name": "BaseBdev3", 00:17:26.287 "aliases": [ 00:17:26.287 "bbb51474-cc2f-45bc-beb8-e7b80958097f" 00:17:26.287 ], 00:17:26.287 "product_name": "Malloc disk", 00:17:26.287 "block_size": 512, 00:17:26.287 "num_blocks": 65536, 00:17:26.287 "uuid": "bbb51474-cc2f-45bc-beb8-e7b80958097f", 00:17:26.287 "assigned_rate_limits": { 00:17:26.287 "rw_ios_per_sec": 0, 00:17:26.287 "rw_mbytes_per_sec": 0, 00:17:26.287 "r_mbytes_per_sec": 0, 00:17:26.287 "w_mbytes_per_sec": 0 00:17:26.287 }, 00:17:26.287 "claimed": true, 00:17:26.287 "claim_type": "exclusive_write", 00:17:26.287 "zoned": false, 00:17:26.287 "supported_io_types": { 00:17:26.287 "read": true, 00:17:26.287 "write": true, 00:17:26.287 "unmap": true, 00:17:26.287 "flush": true, 00:17:26.287 "reset": true, 00:17:26.287 "nvme_admin": false, 
00:17:26.287 "nvme_io": false, 00:17:26.287 "nvme_io_md": false, 00:17:26.287 "write_zeroes": true, 00:17:26.287 "zcopy": true, 00:17:26.287 "get_zone_info": false, 00:17:26.287 "zone_management": false, 00:17:26.287 "zone_append": false, 00:17:26.287 "compare": false, 00:17:26.287 "compare_and_write": false, 00:17:26.287 "abort": true, 00:17:26.287 "seek_hole": false, 00:17:26.287 "seek_data": false, 00:17:26.287 "copy": true, 00:17:26.287 "nvme_iov_md": false 00:17:26.287 }, 00:17:26.287 "memory_domains": [ 00:17:26.287 { 00:17:26.287 "dma_device_id": "system", 00:17:26.287 "dma_device_type": 1 00:17:26.287 }, 00:17:26.287 { 00:17:26.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.287 "dma_device_type": 2 00:17:26.287 } 00:17:26.287 ], 00:17:26.287 "driver_specific": {} 00:17:26.287 } 00:17:26.287 ] 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.287 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.288 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.288 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.288 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.288 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.288 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.288 "name": "Existed_Raid", 00:17:26.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.288 "strip_size_kb": 64, 00:17:26.288 "state": "configuring", 00:17:26.288 "raid_level": "raid5f", 00:17:26.288 "superblock": false, 00:17:26.288 "num_base_bdevs": 4, 00:17:26.288 "num_base_bdevs_discovered": 3, 00:17:26.288 "num_base_bdevs_operational": 4, 00:17:26.288 "base_bdevs_list": [ 00:17:26.288 { 00:17:26.288 "name": "BaseBdev1", 00:17:26.288 "uuid": "59a31121-7967-4979-8e46-3923c3dce8a6", 00:17:26.288 "is_configured": true, 00:17:26.288 "data_offset": 0, 00:17:26.288 "data_size": 65536 00:17:26.288 }, 00:17:26.288 { 00:17:26.288 "name": "BaseBdev2", 00:17:26.288 "uuid": "da12ce0a-e571-4292-b827-2af4ee63a95c", 00:17:26.288 "is_configured": true, 00:17:26.288 "data_offset": 0, 00:17:26.288 "data_size": 65536 00:17:26.288 }, 00:17:26.288 { 
00:17:26.288 "name": "BaseBdev3", 00:17:26.288 "uuid": "bbb51474-cc2f-45bc-beb8-e7b80958097f", 00:17:26.288 "is_configured": true, 00:17:26.288 "data_offset": 0, 00:17:26.288 "data_size": 65536 00:17:26.288 }, 00:17:26.288 { 00:17:26.288 "name": "BaseBdev4", 00:17:26.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.288 "is_configured": false, 00:17:26.288 "data_offset": 0, 00:17:26.288 "data_size": 0 00:17:26.288 } 00:17:26.288 ] 00:17:26.288 }' 00:17:26.288 19:06:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.288 19:06:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.970 [2024-11-26 19:06:53.319268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:26.970 [2024-11-26 19:06:53.319411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:26.970 [2024-11-26 19:06:53.319429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:26.970 [2024-11-26 19:06:53.319845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:26.970 [2024-11-26 19:06:53.327369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:26.970 [2024-11-26 19:06:53.327431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:26.970 [2024-11-26 19:06:53.327875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.970 BaseBdev4 00:17:26.970 19:06:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:26.970 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.971 [ 00:17:26.971 { 00:17:26.971 "name": "BaseBdev4", 00:17:26.971 "aliases": [ 00:17:26.971 "0e698777-7307-412b-80bd-f21500730f05" 00:17:26.971 ], 00:17:26.971 "product_name": "Malloc disk", 00:17:26.971 "block_size": 512, 00:17:26.971 "num_blocks": 65536, 00:17:26.971 "uuid": "0e698777-7307-412b-80bd-f21500730f05", 00:17:26.971 "assigned_rate_limits": { 00:17:26.971 "rw_ios_per_sec": 0, 00:17:26.971 
"rw_mbytes_per_sec": 0, 00:17:26.971 "r_mbytes_per_sec": 0, 00:17:26.971 "w_mbytes_per_sec": 0 00:17:26.971 }, 00:17:26.971 "claimed": true, 00:17:26.971 "claim_type": "exclusive_write", 00:17:26.971 "zoned": false, 00:17:26.971 "supported_io_types": { 00:17:26.971 "read": true, 00:17:26.971 "write": true, 00:17:26.971 "unmap": true, 00:17:26.971 "flush": true, 00:17:26.971 "reset": true, 00:17:26.971 "nvme_admin": false, 00:17:26.971 "nvme_io": false, 00:17:26.971 "nvme_io_md": false, 00:17:26.971 "write_zeroes": true, 00:17:26.971 "zcopy": true, 00:17:26.971 "get_zone_info": false, 00:17:26.971 "zone_management": false, 00:17:26.971 "zone_append": false, 00:17:26.971 "compare": false, 00:17:26.971 "compare_and_write": false, 00:17:26.971 "abort": true, 00:17:26.971 "seek_hole": false, 00:17:26.971 "seek_data": false, 00:17:26.971 "copy": true, 00:17:26.971 "nvme_iov_md": false 00:17:26.971 }, 00:17:26.971 "memory_domains": [ 00:17:26.971 { 00:17:26.971 "dma_device_id": "system", 00:17:26.971 "dma_device_type": 1 00:17:26.971 }, 00:17:26.971 { 00:17:26.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.971 "dma_device_type": 2 00:17:26.971 } 00:17:26.971 ], 00:17:26.971 "driver_specific": {} 00:17:26.971 } 00:17:26.971 ] 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.971 19:06:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.971 "name": "Existed_Raid", 00:17:26.971 "uuid": "550d5282-d799-41a8-acb9-4956cd98b0fc", 00:17:26.971 "strip_size_kb": 64, 00:17:26.971 "state": "online", 00:17:26.971 "raid_level": "raid5f", 00:17:26.971 "superblock": false, 00:17:26.971 "num_base_bdevs": 4, 00:17:26.971 "num_base_bdevs_discovered": 4, 00:17:26.971 "num_base_bdevs_operational": 4, 00:17:26.971 "base_bdevs_list": [ 00:17:26.971 { 00:17:26.971 "name": 
"BaseBdev1", 00:17:26.971 "uuid": "59a31121-7967-4979-8e46-3923c3dce8a6", 00:17:26.971 "is_configured": true, 00:17:26.971 "data_offset": 0, 00:17:26.971 "data_size": 65536 00:17:26.971 }, 00:17:26.971 { 00:17:26.971 "name": "BaseBdev2", 00:17:26.971 "uuid": "da12ce0a-e571-4292-b827-2af4ee63a95c", 00:17:26.971 "is_configured": true, 00:17:26.971 "data_offset": 0, 00:17:26.971 "data_size": 65536 00:17:26.971 }, 00:17:26.971 { 00:17:26.971 "name": "BaseBdev3", 00:17:26.971 "uuid": "bbb51474-cc2f-45bc-beb8-e7b80958097f", 00:17:26.971 "is_configured": true, 00:17:26.971 "data_offset": 0, 00:17:26.971 "data_size": 65536 00:17:26.971 }, 00:17:26.971 { 00:17:26.971 "name": "BaseBdev4", 00:17:26.971 "uuid": "0e698777-7307-412b-80bd-f21500730f05", 00:17:26.971 "is_configured": true, 00:17:26.971 "data_offset": 0, 00:17:26.971 "data_size": 65536 00:17:26.971 } 00:17:26.971 ] 00:17:26.971 }' 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.971 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.540 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:27.540 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:27.540 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:27.540 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:27.540 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:27.540 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:27.540 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:27.541 19:06:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:27.541 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.541 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.541 [2024-11-26 19:06:53.905041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.541 19:06:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.541 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:27.541 "name": "Existed_Raid", 00:17:27.541 "aliases": [ 00:17:27.541 "550d5282-d799-41a8-acb9-4956cd98b0fc" 00:17:27.541 ], 00:17:27.541 "product_name": "Raid Volume", 00:17:27.541 "block_size": 512, 00:17:27.541 "num_blocks": 196608, 00:17:27.541 "uuid": "550d5282-d799-41a8-acb9-4956cd98b0fc", 00:17:27.541 "assigned_rate_limits": { 00:17:27.541 "rw_ios_per_sec": 0, 00:17:27.541 "rw_mbytes_per_sec": 0, 00:17:27.541 "r_mbytes_per_sec": 0, 00:17:27.541 "w_mbytes_per_sec": 0 00:17:27.541 }, 00:17:27.541 "claimed": false, 00:17:27.541 "zoned": false, 00:17:27.541 "supported_io_types": { 00:17:27.541 "read": true, 00:17:27.541 "write": true, 00:17:27.541 "unmap": false, 00:17:27.541 "flush": false, 00:17:27.541 "reset": true, 00:17:27.541 "nvme_admin": false, 00:17:27.541 "nvme_io": false, 00:17:27.541 "nvme_io_md": false, 00:17:27.541 "write_zeroes": true, 00:17:27.541 "zcopy": false, 00:17:27.541 "get_zone_info": false, 00:17:27.541 "zone_management": false, 00:17:27.541 "zone_append": false, 00:17:27.541 "compare": false, 00:17:27.541 "compare_and_write": false, 00:17:27.541 "abort": false, 00:17:27.541 "seek_hole": false, 00:17:27.541 "seek_data": false, 00:17:27.541 "copy": false, 00:17:27.541 "nvme_iov_md": false 00:17:27.541 }, 00:17:27.541 "driver_specific": { 00:17:27.541 "raid": { 00:17:27.541 "uuid": "550d5282-d799-41a8-acb9-4956cd98b0fc", 00:17:27.541 "strip_size_kb": 64, 
00:17:27.541 "state": "online", 00:17:27.541 "raid_level": "raid5f", 00:17:27.541 "superblock": false, 00:17:27.541 "num_base_bdevs": 4, 00:17:27.541 "num_base_bdevs_discovered": 4, 00:17:27.541 "num_base_bdevs_operational": 4, 00:17:27.541 "base_bdevs_list": [ 00:17:27.541 { 00:17:27.541 "name": "BaseBdev1", 00:17:27.541 "uuid": "59a31121-7967-4979-8e46-3923c3dce8a6", 00:17:27.541 "is_configured": true, 00:17:27.541 "data_offset": 0, 00:17:27.541 "data_size": 65536 00:17:27.541 }, 00:17:27.541 { 00:17:27.541 "name": "BaseBdev2", 00:17:27.541 "uuid": "da12ce0a-e571-4292-b827-2af4ee63a95c", 00:17:27.541 "is_configured": true, 00:17:27.541 "data_offset": 0, 00:17:27.541 "data_size": 65536 00:17:27.541 }, 00:17:27.541 { 00:17:27.541 "name": "BaseBdev3", 00:17:27.541 "uuid": "bbb51474-cc2f-45bc-beb8-e7b80958097f", 00:17:27.541 "is_configured": true, 00:17:27.541 "data_offset": 0, 00:17:27.541 "data_size": 65536 00:17:27.541 }, 00:17:27.541 { 00:17:27.541 "name": "BaseBdev4", 00:17:27.541 "uuid": "0e698777-7307-412b-80bd-f21500730f05", 00:17:27.541 "is_configured": true, 00:17:27.541 "data_offset": 0, 00:17:27.541 "data_size": 65536 00:17:27.541 } 00:17:27.541 ] 00:17:27.541 } 00:17:27.541 } 00:17:27.541 }' 00:17:27.541 19:06:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:27.541 BaseBdev2 00:17:27.541 BaseBdev3 00:17:27.541 BaseBdev4' 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.541 19:06:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.541 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:17:27.800 [2024-11-26 19:06:54.273046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.800 19:06:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.800 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.059 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.059 "name": "Existed_Raid", 00:17:28.059 "uuid": "550d5282-d799-41a8-acb9-4956cd98b0fc", 00:17:28.059 "strip_size_kb": 64, 00:17:28.059 "state": "online", 00:17:28.059 "raid_level": "raid5f", 00:17:28.059 "superblock": false, 00:17:28.059 "num_base_bdevs": 4, 00:17:28.059 "num_base_bdevs_discovered": 3, 00:17:28.059 "num_base_bdevs_operational": 3, 00:17:28.059 "base_bdevs_list": [ 00:17:28.059 { 00:17:28.059 "name": null, 00:17:28.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.059 "is_configured": false, 00:17:28.059 "data_offset": 0, 00:17:28.059 "data_size": 65536 00:17:28.059 }, 00:17:28.059 { 00:17:28.059 "name": "BaseBdev2", 00:17:28.059 "uuid": "da12ce0a-e571-4292-b827-2af4ee63a95c", 00:17:28.059 "is_configured": true, 00:17:28.059 "data_offset": 0, 00:17:28.059 "data_size": 65536 00:17:28.059 }, 00:17:28.059 { 00:17:28.059 "name": "BaseBdev3", 00:17:28.059 "uuid": "bbb51474-cc2f-45bc-beb8-e7b80958097f", 00:17:28.059 "is_configured": true, 00:17:28.059 "data_offset": 0, 00:17:28.059 "data_size": 65536 00:17:28.059 }, 00:17:28.059 { 00:17:28.059 "name": "BaseBdev4", 00:17:28.059 "uuid": "0e698777-7307-412b-80bd-f21500730f05", 00:17:28.059 "is_configured": true, 00:17:28.059 "data_offset": 0, 00:17:28.059 "data_size": 65536 00:17:28.059 } 00:17:28.059 ] 00:17:28.059 }' 00:17:28.059 
19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.059 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.317 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:28.317 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.317 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.317 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.317 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:28.317 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.575 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.575 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:28.575 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.575 19:06:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:28.575 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.575 19:06:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.575 [2024-11-26 19:06:54.987568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.575 [2024-11-26 19:06:54.987741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.575 [2024-11-26 19:06:55.068958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.575 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.575 [2024-11-26 19:06:55.133054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.834 [2024-11-26 19:06:55.294243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:28.834 [2024-11-26 19:06:55.294383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.834 19:06:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.834 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.094 BaseBdev2 00:17:29.094 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.094 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:29.094 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:29.094 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.095 [ 00:17:29.095 { 00:17:29.095 "name": "BaseBdev2", 00:17:29.095 "aliases": [ 00:17:29.095 "fa1010a6-94e8-4083-a533-487654351218" 00:17:29.095 ], 00:17:29.095 "product_name": "Malloc disk", 00:17:29.095 "block_size": 512, 00:17:29.095 "num_blocks": 65536, 00:17:29.095 "uuid": "fa1010a6-94e8-4083-a533-487654351218", 00:17:29.095 "assigned_rate_limits": { 00:17:29.095 "rw_ios_per_sec": 0, 00:17:29.095 "rw_mbytes_per_sec": 0, 00:17:29.095 "r_mbytes_per_sec": 0, 00:17:29.095 "w_mbytes_per_sec": 0 00:17:29.095 }, 00:17:29.095 "claimed": false, 00:17:29.095 "zoned": false, 00:17:29.095 "supported_io_types": { 00:17:29.095 "read": true, 00:17:29.095 "write": true, 00:17:29.095 "unmap": true, 00:17:29.095 "flush": true, 00:17:29.095 "reset": true, 00:17:29.095 "nvme_admin": false, 00:17:29.095 "nvme_io": false, 00:17:29.095 "nvme_io_md": false, 00:17:29.095 "write_zeroes": true, 00:17:29.095 "zcopy": true, 00:17:29.095 "get_zone_info": false, 00:17:29.095 "zone_management": false, 00:17:29.095 "zone_append": false, 00:17:29.095 "compare": false, 00:17:29.095 "compare_and_write": false, 00:17:29.095 "abort": true, 00:17:29.095 "seek_hole": false, 00:17:29.095 "seek_data": false, 00:17:29.095 "copy": true, 00:17:29.095 "nvme_iov_md": false 00:17:29.095 }, 00:17:29.095 "memory_domains": [ 00:17:29.095 { 00:17:29.095 "dma_device_id": "system", 00:17:29.095 "dma_device_type": 1 00:17:29.095 }, 
00:17:29.095 { 00:17:29.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.095 "dma_device_type": 2 00:17:29.095 } 00:17:29.095 ], 00:17:29.095 "driver_specific": {} 00:17:29.095 } 00:17:29.095 ] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.095 BaseBdev3 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.095 [ 00:17:29.095 { 00:17:29.095 "name": "BaseBdev3", 00:17:29.095 "aliases": [ 00:17:29.095 "1aec7f55-db14-4618-93bc-b00a2a32322b" 00:17:29.095 ], 00:17:29.095 "product_name": "Malloc disk", 00:17:29.095 "block_size": 512, 00:17:29.095 "num_blocks": 65536, 00:17:29.095 "uuid": "1aec7f55-db14-4618-93bc-b00a2a32322b", 00:17:29.095 "assigned_rate_limits": { 00:17:29.095 "rw_ios_per_sec": 0, 00:17:29.095 "rw_mbytes_per_sec": 0, 00:17:29.095 "r_mbytes_per_sec": 0, 00:17:29.095 "w_mbytes_per_sec": 0 00:17:29.095 }, 00:17:29.095 "claimed": false, 00:17:29.095 "zoned": false, 00:17:29.095 "supported_io_types": { 00:17:29.095 "read": true, 00:17:29.095 "write": true, 00:17:29.095 "unmap": true, 00:17:29.095 "flush": true, 00:17:29.095 "reset": true, 00:17:29.095 "nvme_admin": false, 00:17:29.095 "nvme_io": false, 00:17:29.095 "nvme_io_md": false, 00:17:29.095 "write_zeroes": true, 00:17:29.095 "zcopy": true, 00:17:29.095 "get_zone_info": false, 00:17:29.095 "zone_management": false, 00:17:29.095 "zone_append": false, 00:17:29.095 "compare": false, 00:17:29.095 "compare_and_write": false, 00:17:29.095 "abort": true, 00:17:29.095 "seek_hole": false, 00:17:29.095 "seek_data": false, 00:17:29.095 "copy": true, 00:17:29.095 "nvme_iov_md": false 00:17:29.095 }, 00:17:29.095 "memory_domains": [ 00:17:29.095 { 00:17:29.095 "dma_device_id": "system", 00:17:29.095 
"dma_device_type": 1 00:17:29.095 }, 00:17:29.095 { 00:17:29.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.095 "dma_device_type": 2 00:17:29.095 } 00:17:29.095 ], 00:17:29.095 "driver_specific": {} 00:17:29.095 } 00:17:29.095 ] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.095 BaseBdev4 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:29.095 19:06:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.095 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.095 [ 00:17:29.095 { 00:17:29.095 "name": "BaseBdev4", 00:17:29.095 "aliases": [ 00:17:29.095 "c5b76c5d-d210-469a-a3a4-9b98adc25220" 00:17:29.095 ], 00:17:29.095 "product_name": "Malloc disk", 00:17:29.095 "block_size": 512, 00:17:29.095 "num_blocks": 65536, 00:17:29.095 "uuid": "c5b76c5d-d210-469a-a3a4-9b98adc25220", 00:17:29.095 "assigned_rate_limits": { 00:17:29.095 "rw_ios_per_sec": 0, 00:17:29.095 "rw_mbytes_per_sec": 0, 00:17:29.095 "r_mbytes_per_sec": 0, 00:17:29.095 "w_mbytes_per_sec": 0 00:17:29.095 }, 00:17:29.095 "claimed": false, 00:17:29.095 "zoned": false, 00:17:29.095 "supported_io_types": { 00:17:29.095 "read": true, 00:17:29.095 "write": true, 00:17:29.095 "unmap": true, 00:17:29.095 "flush": true, 00:17:29.095 "reset": true, 00:17:29.095 "nvme_admin": false, 00:17:29.095 "nvme_io": false, 00:17:29.095 "nvme_io_md": false, 00:17:29.095 "write_zeroes": true, 00:17:29.095 "zcopy": true, 00:17:29.095 "get_zone_info": false, 00:17:29.095 "zone_management": false, 00:17:29.095 "zone_append": false, 00:17:29.095 "compare": false, 00:17:29.095 "compare_and_write": false, 00:17:29.095 "abort": true, 00:17:29.095 "seek_hole": false, 00:17:29.095 "seek_data": false, 00:17:29.095 "copy": true, 00:17:29.095 "nvme_iov_md": false 00:17:29.095 }, 00:17:29.095 "memory_domains": [ 00:17:29.095 { 00:17:29.095 
"dma_device_id": "system", 00:17:29.095 "dma_device_type": 1 00:17:29.096 }, 00:17:29.096 { 00:17:29.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.096 "dma_device_type": 2 00:17:29.096 } 00:17:29.096 ], 00:17:29.096 "driver_specific": {} 00:17:29.096 } 00:17:29.096 ] 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.096 [2024-11-26 19:06:55.672077] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:29.096 [2024-11-26 19:06:55.672207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.096 [2024-11-26 19:06:55.672265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:29.096 [2024-11-26 19:06:55.674944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.096 [2024-11-26 19:06:55.675022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.096 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.355 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.355 "name": "Existed_Raid", 00:17:29.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.355 "strip_size_kb": 64, 00:17:29.355 "state": "configuring", 00:17:29.355 "raid_level": "raid5f", 00:17:29.355 "superblock": false, 00:17:29.355 
"num_base_bdevs": 4, 00:17:29.355 "num_base_bdevs_discovered": 3, 00:17:29.355 "num_base_bdevs_operational": 4, 00:17:29.355 "base_bdevs_list": [ 00:17:29.355 { 00:17:29.355 "name": "BaseBdev1", 00:17:29.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.356 "is_configured": false, 00:17:29.356 "data_offset": 0, 00:17:29.356 "data_size": 0 00:17:29.356 }, 00:17:29.356 { 00:17:29.356 "name": "BaseBdev2", 00:17:29.356 "uuid": "fa1010a6-94e8-4083-a533-487654351218", 00:17:29.356 "is_configured": true, 00:17:29.356 "data_offset": 0, 00:17:29.356 "data_size": 65536 00:17:29.356 }, 00:17:29.356 { 00:17:29.356 "name": "BaseBdev3", 00:17:29.356 "uuid": "1aec7f55-db14-4618-93bc-b00a2a32322b", 00:17:29.356 "is_configured": true, 00:17:29.356 "data_offset": 0, 00:17:29.356 "data_size": 65536 00:17:29.356 }, 00:17:29.356 { 00:17:29.356 "name": "BaseBdev4", 00:17:29.356 "uuid": "c5b76c5d-d210-469a-a3a4-9b98adc25220", 00:17:29.356 "is_configured": true, 00:17:29.356 "data_offset": 0, 00:17:29.356 "data_size": 65536 00:17:29.356 } 00:17:29.356 ] 00:17:29.356 }' 00:17:29.356 19:06:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.356 19:06:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.616 [2024-11-26 19:06:56.200364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.616 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.875 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.875 "name": "Existed_Raid", 00:17:29.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.875 "strip_size_kb": 64, 00:17:29.875 "state": "configuring", 00:17:29.875 "raid_level": "raid5f", 00:17:29.875 "superblock": false, 00:17:29.875 "num_base_bdevs": 4, 
00:17:29.875 "num_base_bdevs_discovered": 2, 00:17:29.875 "num_base_bdevs_operational": 4, 00:17:29.875 "base_bdevs_list": [ 00:17:29.875 { 00:17:29.875 "name": "BaseBdev1", 00:17:29.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.875 "is_configured": false, 00:17:29.875 "data_offset": 0, 00:17:29.875 "data_size": 0 00:17:29.875 }, 00:17:29.875 { 00:17:29.875 "name": null, 00:17:29.875 "uuid": "fa1010a6-94e8-4083-a533-487654351218", 00:17:29.875 "is_configured": false, 00:17:29.875 "data_offset": 0, 00:17:29.875 "data_size": 65536 00:17:29.875 }, 00:17:29.875 { 00:17:29.875 "name": "BaseBdev3", 00:17:29.875 "uuid": "1aec7f55-db14-4618-93bc-b00a2a32322b", 00:17:29.875 "is_configured": true, 00:17:29.875 "data_offset": 0, 00:17:29.875 "data_size": 65536 00:17:29.875 }, 00:17:29.875 { 00:17:29.875 "name": "BaseBdev4", 00:17:29.875 "uuid": "c5b76c5d-d210-469a-a3a4-9b98adc25220", 00:17:29.875 "is_configured": true, 00:17:29.875 "data_offset": 0, 00:17:29.875 "data_size": 65536 00:17:29.875 } 00:17:29.875 ] 00:17:29.875 }' 00:17:29.875 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.875 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:30.443 19:06:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.443 [2024-11-26 19:06:56.843982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.443 BaseBdev1 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.443 19:06:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.443 [ 00:17:30.443 { 00:17:30.443 "name": "BaseBdev1", 00:17:30.443 "aliases": [ 00:17:30.443 "9f335b98-8583-45e9-bcff-0b76164067ae" 00:17:30.443 ], 00:17:30.443 "product_name": "Malloc disk", 00:17:30.443 "block_size": 512, 00:17:30.443 "num_blocks": 65536, 00:17:30.443 "uuid": "9f335b98-8583-45e9-bcff-0b76164067ae", 00:17:30.443 "assigned_rate_limits": { 00:17:30.443 "rw_ios_per_sec": 0, 00:17:30.443 "rw_mbytes_per_sec": 0, 00:17:30.443 "r_mbytes_per_sec": 0, 00:17:30.443 "w_mbytes_per_sec": 0 00:17:30.443 }, 00:17:30.443 "claimed": true, 00:17:30.443 "claim_type": "exclusive_write", 00:17:30.443 "zoned": false, 00:17:30.443 "supported_io_types": { 00:17:30.443 "read": true, 00:17:30.443 "write": true, 00:17:30.443 "unmap": true, 00:17:30.443 "flush": true, 00:17:30.443 "reset": true, 00:17:30.443 "nvme_admin": false, 00:17:30.443 "nvme_io": false, 00:17:30.443 "nvme_io_md": false, 00:17:30.443 "write_zeroes": true, 00:17:30.443 "zcopy": true, 00:17:30.443 "get_zone_info": false, 00:17:30.443 "zone_management": false, 00:17:30.443 "zone_append": false, 00:17:30.443 "compare": false, 00:17:30.443 "compare_and_write": false, 00:17:30.443 "abort": true, 00:17:30.443 "seek_hole": false, 00:17:30.443 "seek_data": false, 00:17:30.443 "copy": true, 00:17:30.443 "nvme_iov_md": false 00:17:30.443 }, 00:17:30.443 "memory_domains": [ 00:17:30.443 { 00:17:30.443 "dma_device_id": "system", 00:17:30.443 "dma_device_type": 1 00:17:30.443 }, 00:17:30.443 { 00:17:30.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.443 "dma_device_type": 2 00:17:30.443 } 00:17:30.443 ], 00:17:30.443 "driver_specific": {} 00:17:30.443 } 00:17:30.443 ] 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:30.443 19:06:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.443 "name": "Existed_Raid", 00:17:30.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.443 "strip_size_kb": 64, 00:17:30.443 "state": 
"configuring", 00:17:30.443 "raid_level": "raid5f", 00:17:30.443 "superblock": false, 00:17:30.443 "num_base_bdevs": 4, 00:17:30.443 "num_base_bdevs_discovered": 3, 00:17:30.443 "num_base_bdevs_operational": 4, 00:17:30.443 "base_bdevs_list": [ 00:17:30.443 { 00:17:30.443 "name": "BaseBdev1", 00:17:30.443 "uuid": "9f335b98-8583-45e9-bcff-0b76164067ae", 00:17:30.443 "is_configured": true, 00:17:30.443 "data_offset": 0, 00:17:30.443 "data_size": 65536 00:17:30.443 }, 00:17:30.443 { 00:17:30.443 "name": null, 00:17:30.443 "uuid": "fa1010a6-94e8-4083-a533-487654351218", 00:17:30.443 "is_configured": false, 00:17:30.443 "data_offset": 0, 00:17:30.443 "data_size": 65536 00:17:30.443 }, 00:17:30.443 { 00:17:30.443 "name": "BaseBdev3", 00:17:30.443 "uuid": "1aec7f55-db14-4618-93bc-b00a2a32322b", 00:17:30.443 "is_configured": true, 00:17:30.443 "data_offset": 0, 00:17:30.443 "data_size": 65536 00:17:30.443 }, 00:17:30.443 { 00:17:30.443 "name": "BaseBdev4", 00:17:30.443 "uuid": "c5b76c5d-d210-469a-a3a4-9b98adc25220", 00:17:30.443 "is_configured": true, 00:17:30.443 "data_offset": 0, 00:17:30.443 "data_size": 65536 00:17:30.443 } 00:17:30.443 ] 00:17:30.443 }' 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.443 19:06:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.012 19:06:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.012 [2024-11-26 19:06:57.512427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.012 19:06:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.012 "name": "Existed_Raid", 00:17:31.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.012 "strip_size_kb": 64, 00:17:31.012 "state": "configuring", 00:17:31.012 "raid_level": "raid5f", 00:17:31.012 "superblock": false, 00:17:31.012 "num_base_bdevs": 4, 00:17:31.012 "num_base_bdevs_discovered": 2, 00:17:31.012 "num_base_bdevs_operational": 4, 00:17:31.012 "base_bdevs_list": [ 00:17:31.012 { 00:17:31.012 "name": "BaseBdev1", 00:17:31.012 "uuid": "9f335b98-8583-45e9-bcff-0b76164067ae", 00:17:31.012 "is_configured": true, 00:17:31.012 "data_offset": 0, 00:17:31.012 "data_size": 65536 00:17:31.012 }, 00:17:31.012 { 00:17:31.012 "name": null, 00:17:31.012 "uuid": "fa1010a6-94e8-4083-a533-487654351218", 00:17:31.012 "is_configured": false, 00:17:31.012 "data_offset": 0, 00:17:31.012 "data_size": 65536 00:17:31.012 }, 00:17:31.012 { 00:17:31.012 "name": null, 00:17:31.012 "uuid": "1aec7f55-db14-4618-93bc-b00a2a32322b", 00:17:31.012 "is_configured": false, 00:17:31.012 "data_offset": 0, 00:17:31.012 "data_size": 65536 00:17:31.012 }, 00:17:31.012 { 00:17:31.012 "name": "BaseBdev4", 00:17:31.012 "uuid": "c5b76c5d-d210-469a-a3a4-9b98adc25220", 00:17:31.012 "is_configured": true, 00:17:31.012 "data_offset": 0, 00:17:31.012 "data_size": 65536 00:17:31.012 } 00:17:31.012 ] 00:17:31.012 }' 00:17:31.012 19:06:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.012 19:06:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.579 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:31.579 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.579 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.579 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.579 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.579 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.580 [2024-11-26 19:06:58.140555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.580 
19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.580 "name": "Existed_Raid", 00:17:31.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.580 "strip_size_kb": 64, 00:17:31.580 "state": "configuring", 00:17:31.580 "raid_level": "raid5f", 00:17:31.580 "superblock": false, 00:17:31.580 "num_base_bdevs": 4, 00:17:31.580 "num_base_bdevs_discovered": 3, 00:17:31.580 "num_base_bdevs_operational": 4, 00:17:31.580 "base_bdevs_list": [ 00:17:31.580 { 00:17:31.580 "name": "BaseBdev1", 00:17:31.580 "uuid": "9f335b98-8583-45e9-bcff-0b76164067ae", 00:17:31.580 "is_configured": true, 00:17:31.580 "data_offset": 0, 00:17:31.580 "data_size": 65536 00:17:31.580 }, 00:17:31.580 { 00:17:31.580 "name": null, 00:17:31.580 "uuid": "fa1010a6-94e8-4083-a533-487654351218", 00:17:31.580 "is_configured": 
false, 00:17:31.580 "data_offset": 0, 00:17:31.580 "data_size": 65536 00:17:31.580 }, 00:17:31.580 { 00:17:31.580 "name": "BaseBdev3", 00:17:31.580 "uuid": "1aec7f55-db14-4618-93bc-b00a2a32322b", 00:17:31.580 "is_configured": true, 00:17:31.580 "data_offset": 0, 00:17:31.580 "data_size": 65536 00:17:31.580 }, 00:17:31.580 { 00:17:31.580 "name": "BaseBdev4", 00:17:31.580 "uuid": "c5b76c5d-d210-469a-a3a4-9b98adc25220", 00:17:31.580 "is_configured": true, 00:17:31.580 "data_offset": 0, 00:17:31.580 "data_size": 65536 00:17:31.580 } 00:17:31.580 ] 00:17:31.580 }' 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.580 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.157 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.157 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:32.157 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.157 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.157 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.157 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:32.157 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:32.157 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.157 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.157 [2024-11-26 19:06:58.744864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.425 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.425 "name": "Existed_Raid", 00:17:32.425 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:32.425 "strip_size_kb": 64, 00:17:32.425 "state": "configuring", 00:17:32.425 "raid_level": "raid5f", 00:17:32.425 "superblock": false, 00:17:32.425 "num_base_bdevs": 4, 00:17:32.425 "num_base_bdevs_discovered": 2, 00:17:32.425 "num_base_bdevs_operational": 4, 00:17:32.425 "base_bdevs_list": [ 00:17:32.425 { 00:17:32.425 "name": null, 00:17:32.425 "uuid": "9f335b98-8583-45e9-bcff-0b76164067ae", 00:17:32.425 "is_configured": false, 00:17:32.425 "data_offset": 0, 00:17:32.425 "data_size": 65536 00:17:32.425 }, 00:17:32.425 { 00:17:32.425 "name": null, 00:17:32.426 "uuid": "fa1010a6-94e8-4083-a533-487654351218", 00:17:32.426 "is_configured": false, 00:17:32.426 "data_offset": 0, 00:17:32.426 "data_size": 65536 00:17:32.426 }, 00:17:32.426 { 00:17:32.426 "name": "BaseBdev3", 00:17:32.426 "uuid": "1aec7f55-db14-4618-93bc-b00a2a32322b", 00:17:32.426 "is_configured": true, 00:17:32.426 "data_offset": 0, 00:17:32.426 "data_size": 65536 00:17:32.426 }, 00:17:32.426 { 00:17:32.426 "name": "BaseBdev4", 00:17:32.426 "uuid": "c5b76c5d-d210-469a-a3a4-9b98adc25220", 00:17:32.426 "is_configured": true, 00:17:32.426 "data_offset": 0, 00:17:32.426 "data_size": 65536 00:17:32.426 } 00:17:32.426 ] 00:17:32.426 }' 00:17:32.426 19:06:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.426 19:06:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.993 [2024-11-26 19:06:59.459674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.993 19:06:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.993 "name": "Existed_Raid", 00:17:32.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.993 "strip_size_kb": 64, 00:17:32.993 "state": "configuring", 00:17:32.993 "raid_level": "raid5f", 00:17:32.993 "superblock": false, 00:17:32.993 "num_base_bdevs": 4, 00:17:32.993 "num_base_bdevs_discovered": 3, 00:17:32.993 "num_base_bdevs_operational": 4, 00:17:32.993 "base_bdevs_list": [ 00:17:32.993 { 00:17:32.993 "name": null, 00:17:32.993 "uuid": "9f335b98-8583-45e9-bcff-0b76164067ae", 00:17:32.993 "is_configured": false, 00:17:32.993 "data_offset": 0, 00:17:32.993 "data_size": 65536 00:17:32.993 }, 00:17:32.993 { 00:17:32.993 "name": "BaseBdev2", 00:17:32.993 "uuid": "fa1010a6-94e8-4083-a533-487654351218", 00:17:32.993 "is_configured": true, 00:17:32.993 "data_offset": 0, 00:17:32.993 "data_size": 65536 00:17:32.993 }, 00:17:32.993 { 00:17:32.993 "name": "BaseBdev3", 00:17:32.993 "uuid": "1aec7f55-db14-4618-93bc-b00a2a32322b", 00:17:32.993 "is_configured": true, 00:17:32.993 "data_offset": 0, 00:17:32.994 "data_size": 65536 00:17:32.994 }, 00:17:32.994 { 00:17:32.994 "name": "BaseBdev4", 00:17:32.994 "uuid": "c5b76c5d-d210-469a-a3a4-9b98adc25220", 00:17:32.994 "is_configured": true, 00:17:32.994 "data_offset": 0, 00:17:32.994 "data_size": 65536 00:17:32.994 } 00:17:32.994 ] 00:17:32.994 }' 00:17:32.994 19:06:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.994 19:06:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.561 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.561 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.561 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:33.561 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.561 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.561 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f335b98-8583-45e9-bcff-0b76164067ae 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.562 [2024-11-26 19:07:00.168066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:33.562 [2024-11-26 
19:07:00.168153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:33.562 [2024-11-26 19:07:00.168168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:33.562 [2024-11-26 19:07:00.168557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:33.562 [2024-11-26 19:07:00.175226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:33.562 [2024-11-26 19:07:00.175286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:33.562 [2024-11-26 19:07:00.175616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.562 NewBaseBdev 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.562 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.822 [ 00:17:33.822 { 00:17:33.822 "name": "NewBaseBdev", 00:17:33.822 "aliases": [ 00:17:33.822 "9f335b98-8583-45e9-bcff-0b76164067ae" 00:17:33.822 ], 00:17:33.822 "product_name": "Malloc disk", 00:17:33.822 "block_size": 512, 00:17:33.822 "num_blocks": 65536, 00:17:33.822 "uuid": "9f335b98-8583-45e9-bcff-0b76164067ae", 00:17:33.822 "assigned_rate_limits": { 00:17:33.822 "rw_ios_per_sec": 0, 00:17:33.822 "rw_mbytes_per_sec": 0, 00:17:33.822 "r_mbytes_per_sec": 0, 00:17:33.822 "w_mbytes_per_sec": 0 00:17:33.822 }, 00:17:33.822 "claimed": true, 00:17:33.822 "claim_type": "exclusive_write", 00:17:33.822 "zoned": false, 00:17:33.822 "supported_io_types": { 00:17:33.822 "read": true, 00:17:33.822 "write": true, 00:17:33.822 "unmap": true, 00:17:33.822 "flush": true, 00:17:33.822 "reset": true, 00:17:33.822 "nvme_admin": false, 00:17:33.822 "nvme_io": false, 00:17:33.822 "nvme_io_md": false, 00:17:33.822 "write_zeroes": true, 00:17:33.822 "zcopy": true, 00:17:33.822 "get_zone_info": false, 00:17:33.822 "zone_management": false, 00:17:33.822 "zone_append": false, 00:17:33.822 "compare": false, 00:17:33.822 "compare_and_write": false, 00:17:33.822 "abort": true, 00:17:33.822 "seek_hole": false, 00:17:33.822 "seek_data": false, 00:17:33.822 "copy": true, 00:17:33.822 "nvme_iov_md": false 00:17:33.822 }, 00:17:33.822 "memory_domains": [ 00:17:33.822 { 00:17:33.822 "dma_device_id": "system", 00:17:33.822 "dma_device_type": 1 00:17:33.822 }, 00:17:33.822 { 00:17:33.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.822 "dma_device_type": 2 00:17:33.822 } 
00:17:33.822 ], 00:17:33.822 "driver_specific": {} 00:17:33.822 } 00:17:33.822 ] 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.822 "name": "Existed_Raid", 00:17:33.822 "uuid": "842ef4d4-65ef-442d-b59c-16135560fef0", 00:17:33.822 "strip_size_kb": 64, 00:17:33.822 "state": "online", 00:17:33.822 "raid_level": "raid5f", 00:17:33.822 "superblock": false, 00:17:33.822 "num_base_bdevs": 4, 00:17:33.822 "num_base_bdevs_discovered": 4, 00:17:33.822 "num_base_bdevs_operational": 4, 00:17:33.822 "base_bdevs_list": [ 00:17:33.822 { 00:17:33.822 "name": "NewBaseBdev", 00:17:33.822 "uuid": "9f335b98-8583-45e9-bcff-0b76164067ae", 00:17:33.822 "is_configured": true, 00:17:33.822 "data_offset": 0, 00:17:33.822 "data_size": 65536 00:17:33.822 }, 00:17:33.822 { 00:17:33.822 "name": "BaseBdev2", 00:17:33.822 "uuid": "fa1010a6-94e8-4083-a533-487654351218", 00:17:33.822 "is_configured": true, 00:17:33.822 "data_offset": 0, 00:17:33.822 "data_size": 65536 00:17:33.822 }, 00:17:33.822 { 00:17:33.822 "name": "BaseBdev3", 00:17:33.822 "uuid": "1aec7f55-db14-4618-93bc-b00a2a32322b", 00:17:33.822 "is_configured": true, 00:17:33.822 "data_offset": 0, 00:17:33.822 "data_size": 65536 00:17:33.822 }, 00:17:33.822 { 00:17:33.822 "name": "BaseBdev4", 00:17:33.822 "uuid": "c5b76c5d-d210-469a-a3a4-9b98adc25220", 00:17:33.822 "is_configured": true, 00:17:33.822 "data_offset": 0, 00:17:33.822 "data_size": 65536 00:17:33.822 } 00:17:33.822 ] 00:17:33.822 }' 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.822 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.391 [2024-11-26 19:07:00.788022] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:34.391 "name": "Existed_Raid", 00:17:34.391 "aliases": [ 00:17:34.391 "842ef4d4-65ef-442d-b59c-16135560fef0" 00:17:34.391 ], 00:17:34.391 "product_name": "Raid Volume", 00:17:34.391 "block_size": 512, 00:17:34.391 "num_blocks": 196608, 00:17:34.391 "uuid": "842ef4d4-65ef-442d-b59c-16135560fef0", 00:17:34.391 "assigned_rate_limits": { 00:17:34.391 "rw_ios_per_sec": 0, 00:17:34.391 "rw_mbytes_per_sec": 0, 00:17:34.391 "r_mbytes_per_sec": 0, 00:17:34.391 "w_mbytes_per_sec": 0 00:17:34.391 }, 00:17:34.391 "claimed": false, 00:17:34.391 "zoned": false, 00:17:34.391 "supported_io_types": { 00:17:34.391 "read": true, 00:17:34.391 "write": true, 00:17:34.391 "unmap": false, 00:17:34.391 "flush": false, 00:17:34.391 "reset": true, 00:17:34.391 "nvme_admin": false, 00:17:34.391 "nvme_io": false, 00:17:34.391 "nvme_io_md": 
false, 00:17:34.391 "write_zeroes": true, 00:17:34.391 "zcopy": false, 00:17:34.391 "get_zone_info": false, 00:17:34.391 "zone_management": false, 00:17:34.391 "zone_append": false, 00:17:34.391 "compare": false, 00:17:34.391 "compare_and_write": false, 00:17:34.391 "abort": false, 00:17:34.391 "seek_hole": false, 00:17:34.391 "seek_data": false, 00:17:34.391 "copy": false, 00:17:34.391 "nvme_iov_md": false 00:17:34.391 }, 00:17:34.391 "driver_specific": { 00:17:34.391 "raid": { 00:17:34.391 "uuid": "842ef4d4-65ef-442d-b59c-16135560fef0", 00:17:34.391 "strip_size_kb": 64, 00:17:34.391 "state": "online", 00:17:34.391 "raid_level": "raid5f", 00:17:34.391 "superblock": false, 00:17:34.391 "num_base_bdevs": 4, 00:17:34.391 "num_base_bdevs_discovered": 4, 00:17:34.391 "num_base_bdevs_operational": 4, 00:17:34.391 "base_bdevs_list": [ 00:17:34.391 { 00:17:34.391 "name": "NewBaseBdev", 00:17:34.391 "uuid": "9f335b98-8583-45e9-bcff-0b76164067ae", 00:17:34.391 "is_configured": true, 00:17:34.391 "data_offset": 0, 00:17:34.391 "data_size": 65536 00:17:34.391 }, 00:17:34.391 { 00:17:34.391 "name": "BaseBdev2", 00:17:34.391 "uuid": "fa1010a6-94e8-4083-a533-487654351218", 00:17:34.391 "is_configured": true, 00:17:34.391 "data_offset": 0, 00:17:34.391 "data_size": 65536 00:17:34.391 }, 00:17:34.391 { 00:17:34.391 "name": "BaseBdev3", 00:17:34.391 "uuid": "1aec7f55-db14-4618-93bc-b00a2a32322b", 00:17:34.391 "is_configured": true, 00:17:34.391 "data_offset": 0, 00:17:34.391 "data_size": 65536 00:17:34.391 }, 00:17:34.391 { 00:17:34.391 "name": "BaseBdev4", 00:17:34.391 "uuid": "c5b76c5d-d210-469a-a3a4-9b98adc25220", 00:17:34.391 "is_configured": true, 00:17:34.391 "data_offset": 0, 00:17:34.391 "data_size": 65536 00:17:34.391 } 00:17:34.391 ] 00:17:34.391 } 00:17:34.391 } 00:17:34.391 }' 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.391 19:07:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:34.391 BaseBdev2 00:17:34.391 BaseBdev3 00:17:34.391 BaseBdev4' 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.391 19:07:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.391 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:34.391 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.391 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.391 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.651 [2024-11-26 19:07:01.175707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.651 [2024-11-26 19:07:01.175768] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.651 [2024-11-26 19:07:01.175897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.651 [2024-11-26 19:07:01.176319] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.651 [2024-11-26 19:07:01.176337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83665 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83665 ']' 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83665 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.651 19:07:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83665 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83665' 00:17:34.651 killing process with pid 83665 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83665 00:17:34.651 [2024-11-26 19:07:01.219068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.651 19:07:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83665 00:17:35.220 [2024-11-26 19:07:01.578504] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:36.156 00:17:36.156 real 0m13.355s 00:17:36.156 user 0m21.958s 00:17:36.156 sys 0m2.044s 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.156 ************************************ 00:17:36.156 END TEST raid5f_state_function_test 00:17:36.156 ************************************ 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.156 19:07:02 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:36.156 19:07:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:36.156 19:07:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.156 19:07:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:36.156 ************************************ 00:17:36.156 START TEST 
raid5f_state_function_test_sb 00:17:36.156 ************************************ 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:36.156 
19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84349 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:36.156 19:07:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84349' 00:17:36.156 Process raid pid: 84349 00:17:36.156 19:07:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84349 00:17:36.157 19:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84349 ']' 00:17:36.157 19:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.157 19:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.157 19:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.157 19:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.157 19:07:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.416 [2024-11-26 19:07:02.867321] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:17:36.416 [2024-11-26 19:07:02.867510] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.675 [2024-11-26 19:07:03.046221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.675 [2024-11-26 19:07:03.181889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.935 [2024-11-26 19:07:03.398066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.935 [2024-11-26 19:07:03.398141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.528 [2024-11-26 19:07:03.894856] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.528 [2024-11-26 19:07:03.894955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.528 [2024-11-26 19:07:03.894984] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.528 [2024-11-26 19:07:03.895002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.528 [2024-11-26 19:07:03.895012] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:37.528 [2024-11-26 19:07:03.895027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:37.528 [2024-11-26 19:07:03.895037] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:37.528 [2024-11-26 19:07:03.895052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.528 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.528 "name": "Existed_Raid", 00:17:37.528 "uuid": "8be88ebb-1467-486a-b62b-6f50f39c288f", 00:17:37.528 "strip_size_kb": 64, 00:17:37.528 "state": "configuring", 00:17:37.528 "raid_level": "raid5f", 00:17:37.528 "superblock": true, 00:17:37.528 "num_base_bdevs": 4, 00:17:37.528 "num_base_bdevs_discovered": 0, 00:17:37.528 "num_base_bdevs_operational": 4, 00:17:37.528 "base_bdevs_list": [ 00:17:37.529 { 00:17:37.529 "name": "BaseBdev1", 00:17:37.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.529 "is_configured": false, 00:17:37.529 "data_offset": 0, 00:17:37.529 "data_size": 0 00:17:37.529 }, 00:17:37.529 { 00:17:37.529 "name": "BaseBdev2", 00:17:37.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.529 "is_configured": false, 00:17:37.529 "data_offset": 0, 00:17:37.529 "data_size": 0 00:17:37.529 }, 00:17:37.529 { 00:17:37.529 "name": "BaseBdev3", 00:17:37.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.529 "is_configured": false, 00:17:37.529 "data_offset": 0, 00:17:37.529 "data_size": 0 00:17:37.529 }, 00:17:37.529 { 00:17:37.529 "name": "BaseBdev4", 00:17:37.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.529 "is_configured": false, 00:17:37.529 "data_offset": 0, 00:17:37.529 "data_size": 0 00:17:37.529 } 00:17:37.529 ] 00:17:37.529 }' 00:17:37.529 19:07:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.529 19:07:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:38.096 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:38.096 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.096 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.096 [2024-11-26 19:07:04.431022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.096 [2024-11-26 19:07:04.431110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:38.096 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.096 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:38.096 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.096 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.096 [2024-11-26 19:07:04.438993] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.096 [2024-11-26 19:07:04.439074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.096 [2024-11-26 19:07:04.439091] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.096 [2024-11-26 19:07:04.439108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.096 [2024-11-26 19:07:04.439118] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:38.097 [2024-11-26 19:07:04.439133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:38.097 [2024-11-26 19:07:04.439143] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:38.097 [2024-11-26 19:07:04.439162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.097 [2024-11-26 19:07:04.486430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.097 BaseBdev1 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.097 [ 00:17:38.097 { 00:17:38.097 "name": "BaseBdev1", 00:17:38.097 "aliases": [ 00:17:38.097 "e7560d43-b549-4353-9908-28dd58b58520" 00:17:38.097 ], 00:17:38.097 "product_name": "Malloc disk", 00:17:38.097 "block_size": 512, 00:17:38.097 "num_blocks": 65536, 00:17:38.097 "uuid": "e7560d43-b549-4353-9908-28dd58b58520", 00:17:38.097 "assigned_rate_limits": { 00:17:38.097 "rw_ios_per_sec": 0, 00:17:38.097 "rw_mbytes_per_sec": 0, 00:17:38.097 "r_mbytes_per_sec": 0, 00:17:38.097 "w_mbytes_per_sec": 0 00:17:38.097 }, 00:17:38.097 "claimed": true, 00:17:38.097 "claim_type": "exclusive_write", 00:17:38.097 "zoned": false, 00:17:38.097 "supported_io_types": { 00:17:38.097 "read": true, 00:17:38.097 "write": true, 00:17:38.097 "unmap": true, 00:17:38.097 "flush": true, 00:17:38.097 "reset": true, 00:17:38.097 "nvme_admin": false, 00:17:38.097 "nvme_io": false, 00:17:38.097 "nvme_io_md": false, 00:17:38.097 "write_zeroes": true, 00:17:38.097 "zcopy": true, 00:17:38.097 "get_zone_info": false, 00:17:38.097 "zone_management": false, 00:17:38.097 "zone_append": false, 00:17:38.097 "compare": false, 00:17:38.097 "compare_and_write": false, 00:17:38.097 "abort": true, 00:17:38.097 "seek_hole": false, 00:17:38.097 "seek_data": false, 00:17:38.097 "copy": true, 00:17:38.097 "nvme_iov_md": false 00:17:38.097 }, 00:17:38.097 "memory_domains": [ 00:17:38.097 { 00:17:38.097 "dma_device_id": "system", 00:17:38.097 "dma_device_type": 1 00:17:38.097 }, 00:17:38.097 { 00:17:38.097 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:38.097 "dma_device_type": 2 00:17:38.097 } 00:17:38.097 ], 00:17:38.097 "driver_specific": {} 00:17:38.097 } 00:17:38.097 ] 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.097 19:07:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.097 "name": "Existed_Raid", 00:17:38.097 "uuid": "3e93146d-3d14-43bf-9684-c967f580fd45", 00:17:38.097 "strip_size_kb": 64, 00:17:38.097 "state": "configuring", 00:17:38.097 "raid_level": "raid5f", 00:17:38.097 "superblock": true, 00:17:38.097 "num_base_bdevs": 4, 00:17:38.097 "num_base_bdevs_discovered": 1, 00:17:38.097 "num_base_bdevs_operational": 4, 00:17:38.097 "base_bdevs_list": [ 00:17:38.097 { 00:17:38.097 "name": "BaseBdev1", 00:17:38.097 "uuid": "e7560d43-b549-4353-9908-28dd58b58520", 00:17:38.097 "is_configured": true, 00:17:38.097 "data_offset": 2048, 00:17:38.097 "data_size": 63488 00:17:38.097 }, 00:17:38.097 { 00:17:38.097 "name": "BaseBdev2", 00:17:38.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.097 "is_configured": false, 00:17:38.097 "data_offset": 0, 00:17:38.097 "data_size": 0 00:17:38.097 }, 00:17:38.097 { 00:17:38.097 "name": "BaseBdev3", 00:17:38.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.097 "is_configured": false, 00:17:38.097 "data_offset": 0, 00:17:38.097 "data_size": 0 00:17:38.097 }, 00:17:38.097 { 00:17:38.097 "name": "BaseBdev4", 00:17:38.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.097 "is_configured": false, 00:17:38.097 "data_offset": 0, 00:17:38.097 "data_size": 0 00:17:38.097 } 00:17:38.097 ] 00:17:38.097 }' 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.097 19:07:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.665 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:38.665 19:07:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.665 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.665 [2024-11-26 19:07:05.058763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.665 [2024-11-26 19:07:05.058894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:38.665 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.665 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:38.665 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.665 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.665 [2024-11-26 19:07:05.066816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.665 [2024-11-26 19:07:05.069578] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.665 [2024-11-26 19:07:05.069976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.665 [2024-11-26 19:07:05.070007] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:38.665 [2024-11-26 19:07:05.070028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:38.665 [2024-11-26 19:07:05.070039] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:38.665 [2024-11-26 19:07:05.070053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:38.665 19:07:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.665 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:38.665 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.666 19:07:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.666 "name": "Existed_Raid", 00:17:38.666 "uuid": "5c5c925a-79a6-4e9d-9db0-99b2895d1bd7", 00:17:38.666 "strip_size_kb": 64, 00:17:38.666 "state": "configuring", 00:17:38.666 "raid_level": "raid5f", 00:17:38.666 "superblock": true, 00:17:38.666 "num_base_bdevs": 4, 00:17:38.666 "num_base_bdevs_discovered": 1, 00:17:38.666 "num_base_bdevs_operational": 4, 00:17:38.666 "base_bdevs_list": [ 00:17:38.666 { 00:17:38.666 "name": "BaseBdev1", 00:17:38.666 "uuid": "e7560d43-b549-4353-9908-28dd58b58520", 00:17:38.666 "is_configured": true, 00:17:38.666 "data_offset": 2048, 00:17:38.666 "data_size": 63488 00:17:38.666 }, 00:17:38.666 { 00:17:38.666 "name": "BaseBdev2", 00:17:38.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.666 "is_configured": false, 00:17:38.666 "data_offset": 0, 00:17:38.666 "data_size": 0 00:17:38.666 }, 00:17:38.666 { 00:17:38.666 "name": "BaseBdev3", 00:17:38.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.666 "is_configured": false, 00:17:38.666 "data_offset": 0, 00:17:38.666 "data_size": 0 00:17:38.666 }, 00:17:38.666 { 00:17:38.666 "name": "BaseBdev4", 00:17:38.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.666 "is_configured": false, 00:17:38.666 "data_offset": 0, 00:17:38.666 "data_size": 0 00:17:38.666 } 00:17:38.666 ] 00:17:38.666 }' 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.666 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.234 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:39.234 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:39.234 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.234 [2024-11-26 19:07:05.650101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.234 BaseBdev2 00:17:39.234 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.234 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:39.234 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.235 [ 00:17:39.235 { 00:17:39.235 "name": "BaseBdev2", 00:17:39.235 "aliases": [ 00:17:39.235 
"f8b2ead4-2680-446d-b230-2e2a98152a71" 00:17:39.235 ], 00:17:39.235 "product_name": "Malloc disk", 00:17:39.235 "block_size": 512, 00:17:39.235 "num_blocks": 65536, 00:17:39.235 "uuid": "f8b2ead4-2680-446d-b230-2e2a98152a71", 00:17:39.235 "assigned_rate_limits": { 00:17:39.235 "rw_ios_per_sec": 0, 00:17:39.235 "rw_mbytes_per_sec": 0, 00:17:39.235 "r_mbytes_per_sec": 0, 00:17:39.235 "w_mbytes_per_sec": 0 00:17:39.235 }, 00:17:39.235 "claimed": true, 00:17:39.235 "claim_type": "exclusive_write", 00:17:39.235 "zoned": false, 00:17:39.235 "supported_io_types": { 00:17:39.235 "read": true, 00:17:39.235 "write": true, 00:17:39.235 "unmap": true, 00:17:39.235 "flush": true, 00:17:39.235 "reset": true, 00:17:39.235 "nvme_admin": false, 00:17:39.235 "nvme_io": false, 00:17:39.235 "nvme_io_md": false, 00:17:39.235 "write_zeroes": true, 00:17:39.235 "zcopy": true, 00:17:39.235 "get_zone_info": false, 00:17:39.235 "zone_management": false, 00:17:39.235 "zone_append": false, 00:17:39.235 "compare": false, 00:17:39.235 "compare_and_write": false, 00:17:39.235 "abort": true, 00:17:39.235 "seek_hole": false, 00:17:39.235 "seek_data": false, 00:17:39.235 "copy": true, 00:17:39.235 "nvme_iov_md": false 00:17:39.235 }, 00:17:39.235 "memory_domains": [ 00:17:39.235 { 00:17:39.235 "dma_device_id": "system", 00:17:39.235 "dma_device_type": 1 00:17:39.235 }, 00:17:39.235 { 00:17:39.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.235 "dma_device_type": 2 00:17:39.235 } 00:17:39.235 ], 00:17:39.235 "driver_specific": {} 00:17:39.235 } 00:17:39.235 ] 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.235 "name": "Existed_Raid", 00:17:39.235 "uuid": 
"5c5c925a-79a6-4e9d-9db0-99b2895d1bd7", 00:17:39.235 "strip_size_kb": 64, 00:17:39.235 "state": "configuring", 00:17:39.235 "raid_level": "raid5f", 00:17:39.235 "superblock": true, 00:17:39.235 "num_base_bdevs": 4, 00:17:39.235 "num_base_bdevs_discovered": 2, 00:17:39.235 "num_base_bdevs_operational": 4, 00:17:39.235 "base_bdevs_list": [ 00:17:39.235 { 00:17:39.235 "name": "BaseBdev1", 00:17:39.235 "uuid": "e7560d43-b549-4353-9908-28dd58b58520", 00:17:39.235 "is_configured": true, 00:17:39.235 "data_offset": 2048, 00:17:39.235 "data_size": 63488 00:17:39.235 }, 00:17:39.235 { 00:17:39.235 "name": "BaseBdev2", 00:17:39.235 "uuid": "f8b2ead4-2680-446d-b230-2e2a98152a71", 00:17:39.235 "is_configured": true, 00:17:39.235 "data_offset": 2048, 00:17:39.235 "data_size": 63488 00:17:39.235 }, 00:17:39.235 { 00:17:39.235 "name": "BaseBdev3", 00:17:39.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.235 "is_configured": false, 00:17:39.235 "data_offset": 0, 00:17:39.235 "data_size": 0 00:17:39.235 }, 00:17:39.235 { 00:17:39.235 "name": "BaseBdev4", 00:17:39.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.235 "is_configured": false, 00:17:39.235 "data_offset": 0, 00:17:39.235 "data_size": 0 00:17:39.235 } 00:17:39.235 ] 00:17:39.235 }' 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.235 19:07:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.802 [2024-11-26 19:07:06.303875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:39.802 BaseBdev3 
00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.802 [ 00:17:39.802 { 00:17:39.802 "name": "BaseBdev3", 00:17:39.802 "aliases": [ 00:17:39.802 "31f3d0e4-0cb9-4342-97d7-32b40c6bbf4f" 00:17:39.802 ], 00:17:39.802 "product_name": "Malloc disk", 00:17:39.802 "block_size": 512, 00:17:39.802 "num_blocks": 65536, 00:17:39.802 "uuid": "31f3d0e4-0cb9-4342-97d7-32b40c6bbf4f", 00:17:39.802 
"assigned_rate_limits": { 00:17:39.802 "rw_ios_per_sec": 0, 00:17:39.802 "rw_mbytes_per_sec": 0, 00:17:39.802 "r_mbytes_per_sec": 0, 00:17:39.802 "w_mbytes_per_sec": 0 00:17:39.802 }, 00:17:39.802 "claimed": true, 00:17:39.802 "claim_type": "exclusive_write", 00:17:39.802 "zoned": false, 00:17:39.802 "supported_io_types": { 00:17:39.802 "read": true, 00:17:39.802 "write": true, 00:17:39.802 "unmap": true, 00:17:39.802 "flush": true, 00:17:39.802 "reset": true, 00:17:39.802 "nvme_admin": false, 00:17:39.802 "nvme_io": false, 00:17:39.802 "nvme_io_md": false, 00:17:39.802 "write_zeroes": true, 00:17:39.802 "zcopy": true, 00:17:39.802 "get_zone_info": false, 00:17:39.802 "zone_management": false, 00:17:39.802 "zone_append": false, 00:17:39.802 "compare": false, 00:17:39.802 "compare_and_write": false, 00:17:39.802 "abort": true, 00:17:39.802 "seek_hole": false, 00:17:39.802 "seek_data": false, 00:17:39.802 "copy": true, 00:17:39.802 "nvme_iov_md": false 00:17:39.802 }, 00:17:39.802 "memory_domains": [ 00:17:39.802 { 00:17:39.802 "dma_device_id": "system", 00:17:39.802 "dma_device_type": 1 00:17:39.802 }, 00:17:39.802 { 00:17:39.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.802 "dma_device_type": 2 00:17:39.802 } 00:17:39.802 ], 00:17:39.802 "driver_specific": {} 00:17:39.802 } 00:17:39.802 ] 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.802 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.802 "name": "Existed_Raid", 00:17:39.802 "uuid": "5c5c925a-79a6-4e9d-9db0-99b2895d1bd7", 00:17:39.802 "strip_size_kb": 64, 00:17:39.802 "state": "configuring", 00:17:39.802 "raid_level": "raid5f", 00:17:39.802 "superblock": true, 00:17:39.802 "num_base_bdevs": 4, 00:17:39.802 "num_base_bdevs_discovered": 3, 
00:17:39.802 "num_base_bdevs_operational": 4, 00:17:39.803 "base_bdevs_list": [ 00:17:39.803 { 00:17:39.803 "name": "BaseBdev1", 00:17:39.803 "uuid": "e7560d43-b549-4353-9908-28dd58b58520", 00:17:39.803 "is_configured": true, 00:17:39.803 "data_offset": 2048, 00:17:39.803 "data_size": 63488 00:17:39.803 }, 00:17:39.803 { 00:17:39.803 "name": "BaseBdev2", 00:17:39.803 "uuid": "f8b2ead4-2680-446d-b230-2e2a98152a71", 00:17:39.803 "is_configured": true, 00:17:39.803 "data_offset": 2048, 00:17:39.803 "data_size": 63488 00:17:39.803 }, 00:17:39.803 { 00:17:39.803 "name": "BaseBdev3", 00:17:39.803 "uuid": "31f3d0e4-0cb9-4342-97d7-32b40c6bbf4f", 00:17:39.803 "is_configured": true, 00:17:39.803 "data_offset": 2048, 00:17:39.803 "data_size": 63488 00:17:39.803 }, 00:17:39.803 { 00:17:39.803 "name": "BaseBdev4", 00:17:39.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.803 "is_configured": false, 00:17:39.803 "data_offset": 0, 00:17:39.803 "data_size": 0 00:17:39.803 } 00:17:39.803 ] 00:17:39.803 }' 00:17:39.803 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.803 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.371 [2024-11-26 19:07:06.933141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:40.371 [2024-11-26 19:07:06.933886] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:40.371 [2024-11-26 19:07:06.933914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:40.371 BaseBdev4 
00:17:40.371 [2024-11-26 19:07:06.934278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.371 [2024-11-26 19:07:06.941428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:40.371 [2024-11-26 19:07:06.941605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:40.371 [2024-11-26 19:07:06.942094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:40.371 19:07:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.371 [ 00:17:40.371 { 00:17:40.371 "name": "BaseBdev4", 00:17:40.371 "aliases": [ 00:17:40.371 "668ca543-a21b-41ba-afad-64db254dd8de" 00:17:40.371 ], 00:17:40.371 "product_name": "Malloc disk", 00:17:40.371 "block_size": 512, 00:17:40.371 "num_blocks": 65536, 00:17:40.371 "uuid": "668ca543-a21b-41ba-afad-64db254dd8de", 00:17:40.371 "assigned_rate_limits": { 00:17:40.371 "rw_ios_per_sec": 0, 00:17:40.371 "rw_mbytes_per_sec": 0, 00:17:40.371 "r_mbytes_per_sec": 0, 00:17:40.371 "w_mbytes_per_sec": 0 00:17:40.371 }, 00:17:40.371 "claimed": true, 00:17:40.371 "claim_type": "exclusive_write", 00:17:40.371 "zoned": false, 00:17:40.371 "supported_io_types": { 00:17:40.371 "read": true, 00:17:40.371 "write": true, 00:17:40.371 "unmap": true, 00:17:40.371 "flush": true, 00:17:40.371 "reset": true, 00:17:40.371 "nvme_admin": false, 00:17:40.371 "nvme_io": false, 00:17:40.371 "nvme_io_md": false, 00:17:40.371 "write_zeroes": true, 00:17:40.371 "zcopy": true, 00:17:40.371 "get_zone_info": false, 00:17:40.371 "zone_management": false, 00:17:40.371 "zone_append": false, 00:17:40.371 "compare": false, 00:17:40.371 "compare_and_write": false, 00:17:40.371 "abort": true, 00:17:40.371 "seek_hole": false, 00:17:40.371 "seek_data": false, 00:17:40.371 "copy": true, 00:17:40.371 "nvme_iov_md": false 00:17:40.371 }, 00:17:40.371 "memory_domains": [ 00:17:40.371 { 00:17:40.371 "dma_device_id": "system", 00:17:40.371 "dma_device_type": 1 00:17:40.371 }, 00:17:40.371 { 00:17:40.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.371 "dma_device_type": 2 00:17:40.371 } 00:17:40.371 ], 00:17:40.371 "driver_specific": {} 00:17:40.371 } 00:17:40.371 ] 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.371 19:07:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.371 19:07:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:40.630 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.630 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.630 "name": "Existed_Raid", 00:17:40.630 "uuid": "5c5c925a-79a6-4e9d-9db0-99b2895d1bd7", 00:17:40.630 "strip_size_kb": 64, 00:17:40.630 "state": "online", 00:17:40.630 "raid_level": "raid5f", 00:17:40.630 "superblock": true, 00:17:40.630 "num_base_bdevs": 4, 00:17:40.630 "num_base_bdevs_discovered": 4, 00:17:40.630 "num_base_bdevs_operational": 4, 00:17:40.630 "base_bdevs_list": [ 00:17:40.630 { 00:17:40.630 "name": "BaseBdev1", 00:17:40.630 "uuid": "e7560d43-b549-4353-9908-28dd58b58520", 00:17:40.630 "is_configured": true, 00:17:40.630 "data_offset": 2048, 00:17:40.630 "data_size": 63488 00:17:40.630 }, 00:17:40.630 { 00:17:40.630 "name": "BaseBdev2", 00:17:40.630 "uuid": "f8b2ead4-2680-446d-b230-2e2a98152a71", 00:17:40.630 "is_configured": true, 00:17:40.630 "data_offset": 2048, 00:17:40.630 "data_size": 63488 00:17:40.630 }, 00:17:40.630 { 00:17:40.630 "name": "BaseBdev3", 00:17:40.630 "uuid": "31f3d0e4-0cb9-4342-97d7-32b40c6bbf4f", 00:17:40.630 "is_configured": true, 00:17:40.630 "data_offset": 2048, 00:17:40.630 "data_size": 63488 00:17:40.630 }, 00:17:40.630 { 00:17:40.630 "name": "BaseBdev4", 00:17:40.630 "uuid": "668ca543-a21b-41ba-afad-64db254dd8de", 00:17:40.630 "is_configured": true, 00:17:40.630 "data_offset": 2048, 00:17:40.630 "data_size": 63488 00:17:40.630 } 00:17:40.630 ] 00:17:40.630 }' 00:17:40.630 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.630 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:41.196 [2024-11-26 19:07:07.543116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:41.196 "name": "Existed_Raid", 00:17:41.196 "aliases": [ 00:17:41.196 "5c5c925a-79a6-4e9d-9db0-99b2895d1bd7" 00:17:41.196 ], 00:17:41.196 "product_name": "Raid Volume", 00:17:41.196 "block_size": 512, 00:17:41.196 "num_blocks": 190464, 00:17:41.196 "uuid": "5c5c925a-79a6-4e9d-9db0-99b2895d1bd7", 00:17:41.196 "assigned_rate_limits": { 00:17:41.196 "rw_ios_per_sec": 0, 00:17:41.196 "rw_mbytes_per_sec": 0, 00:17:41.196 "r_mbytes_per_sec": 0, 00:17:41.196 "w_mbytes_per_sec": 0 00:17:41.196 }, 00:17:41.196 "claimed": false, 00:17:41.196 "zoned": false, 00:17:41.196 "supported_io_types": { 00:17:41.196 "read": true, 00:17:41.196 "write": true, 00:17:41.196 "unmap": false, 00:17:41.196 "flush": false, 
00:17:41.196 "reset": true, 00:17:41.196 "nvme_admin": false, 00:17:41.196 "nvme_io": false, 00:17:41.196 "nvme_io_md": false, 00:17:41.196 "write_zeroes": true, 00:17:41.196 "zcopy": false, 00:17:41.196 "get_zone_info": false, 00:17:41.196 "zone_management": false, 00:17:41.196 "zone_append": false, 00:17:41.196 "compare": false, 00:17:41.196 "compare_and_write": false, 00:17:41.196 "abort": false, 00:17:41.196 "seek_hole": false, 00:17:41.196 "seek_data": false, 00:17:41.196 "copy": false, 00:17:41.196 "nvme_iov_md": false 00:17:41.196 }, 00:17:41.196 "driver_specific": { 00:17:41.196 "raid": { 00:17:41.196 "uuid": "5c5c925a-79a6-4e9d-9db0-99b2895d1bd7", 00:17:41.196 "strip_size_kb": 64, 00:17:41.196 "state": "online", 00:17:41.196 "raid_level": "raid5f", 00:17:41.196 "superblock": true, 00:17:41.196 "num_base_bdevs": 4, 00:17:41.196 "num_base_bdevs_discovered": 4, 00:17:41.196 "num_base_bdevs_operational": 4, 00:17:41.196 "base_bdevs_list": [ 00:17:41.196 { 00:17:41.196 "name": "BaseBdev1", 00:17:41.196 "uuid": "e7560d43-b549-4353-9908-28dd58b58520", 00:17:41.196 "is_configured": true, 00:17:41.196 "data_offset": 2048, 00:17:41.196 "data_size": 63488 00:17:41.196 }, 00:17:41.196 { 00:17:41.196 "name": "BaseBdev2", 00:17:41.196 "uuid": "f8b2ead4-2680-446d-b230-2e2a98152a71", 00:17:41.196 "is_configured": true, 00:17:41.196 "data_offset": 2048, 00:17:41.196 "data_size": 63488 00:17:41.196 }, 00:17:41.196 { 00:17:41.196 "name": "BaseBdev3", 00:17:41.196 "uuid": "31f3d0e4-0cb9-4342-97d7-32b40c6bbf4f", 00:17:41.196 "is_configured": true, 00:17:41.196 "data_offset": 2048, 00:17:41.196 "data_size": 63488 00:17:41.196 }, 00:17:41.196 { 00:17:41.196 "name": "BaseBdev4", 00:17:41.196 "uuid": "668ca543-a21b-41ba-afad-64db254dd8de", 00:17:41.196 "is_configured": true, 00:17:41.196 "data_offset": 2048, 00:17:41.196 "data_size": 63488 00:17:41.196 } 00:17:41.196 ] 00:17:41.196 } 00:17:41.196 } 00:17:41.196 }' 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:41.196 BaseBdev2 00:17:41.196 BaseBdev3 00:17:41.196 BaseBdev4' 00:17:41.196 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:41.197 19:07:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.197 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.455 19:07:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 [2024-11-26 19:07:07.935091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.455 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.713 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.713 "name": "Existed_Raid", 00:17:41.713 "uuid": "5c5c925a-79a6-4e9d-9db0-99b2895d1bd7", 00:17:41.713 "strip_size_kb": 64, 00:17:41.713 "state": "online", 00:17:41.713 "raid_level": "raid5f", 00:17:41.713 "superblock": true, 00:17:41.713 "num_base_bdevs": 4, 00:17:41.713 "num_base_bdevs_discovered": 3, 00:17:41.713 "num_base_bdevs_operational": 3, 00:17:41.713 "base_bdevs_list": [ 00:17:41.713 { 00:17:41.713 "name": null, 00:17:41.713 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:41.713 "is_configured": false, 00:17:41.713 "data_offset": 0, 00:17:41.713 "data_size": 63488 00:17:41.713 }, 00:17:41.713 { 00:17:41.713 "name": "BaseBdev2", 00:17:41.713 "uuid": "f8b2ead4-2680-446d-b230-2e2a98152a71", 00:17:41.713 "is_configured": true, 00:17:41.713 "data_offset": 2048, 00:17:41.713 "data_size": 63488 00:17:41.713 }, 00:17:41.713 { 00:17:41.713 "name": "BaseBdev3", 00:17:41.713 "uuid": "31f3d0e4-0cb9-4342-97d7-32b40c6bbf4f", 00:17:41.713 "is_configured": true, 00:17:41.713 "data_offset": 2048, 00:17:41.713 "data_size": 63488 00:17:41.713 }, 00:17:41.713 { 00:17:41.713 "name": "BaseBdev4", 00:17:41.713 "uuid": "668ca543-a21b-41ba-afad-64db254dd8de", 00:17:41.713 "is_configured": true, 00:17:41.713 "data_offset": 2048, 00:17:41.713 "data_size": 63488 00:17:41.713 } 00:17:41.713 ] 00:17:41.713 }' 00:17:41.713 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.713 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.971 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:41.972 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:41.972 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.972 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.972 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.972 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:41.972 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.972 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:17:41.972 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:41.972 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:41.972 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.972 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.230 [2024-11-26 19:07:08.596759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:42.230 [2024-11-26 19:07:08.597037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.230 [2024-11-26 19:07:08.691450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.230 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.230 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:42.230 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:42.230 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.230 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:42.230 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.230 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.230 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.230 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:42.231 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:42.231 
19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:42.231 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.231 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.231 [2024-11-26 19:07:08.771496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.489 19:07:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.489 [2024-11-26 19:07:08.926095] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:42.489 [2024-11-26 19:07:08.926374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.489 19:07:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.752 BaseBdev2 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.752 [ 00:17:42.752 { 00:17:42.752 "name": "BaseBdev2", 00:17:42.752 "aliases": [ 00:17:42.752 "5bd4a85e-0e1a-4ba2-9800-998e9df18409" 00:17:42.752 ], 00:17:42.752 "product_name": "Malloc disk", 00:17:42.752 "block_size": 512, 00:17:42.752 "num_blocks": 65536, 00:17:42.752 "uuid": 
"5bd4a85e-0e1a-4ba2-9800-998e9df18409", 00:17:42.752 "assigned_rate_limits": { 00:17:42.752 "rw_ios_per_sec": 0, 00:17:42.752 "rw_mbytes_per_sec": 0, 00:17:42.752 "r_mbytes_per_sec": 0, 00:17:42.752 "w_mbytes_per_sec": 0 00:17:42.752 }, 00:17:42.752 "claimed": false, 00:17:42.752 "zoned": false, 00:17:42.752 "supported_io_types": { 00:17:42.752 "read": true, 00:17:42.752 "write": true, 00:17:42.752 "unmap": true, 00:17:42.752 "flush": true, 00:17:42.752 "reset": true, 00:17:42.752 "nvme_admin": false, 00:17:42.752 "nvme_io": false, 00:17:42.752 "nvme_io_md": false, 00:17:42.752 "write_zeroes": true, 00:17:42.752 "zcopy": true, 00:17:42.752 "get_zone_info": false, 00:17:42.752 "zone_management": false, 00:17:42.752 "zone_append": false, 00:17:42.752 "compare": false, 00:17:42.752 "compare_and_write": false, 00:17:42.752 "abort": true, 00:17:42.752 "seek_hole": false, 00:17:42.752 "seek_data": false, 00:17:42.752 "copy": true, 00:17:42.752 "nvme_iov_md": false 00:17:42.752 }, 00:17:42.752 "memory_domains": [ 00:17:42.752 { 00:17:42.752 "dma_device_id": "system", 00:17:42.752 "dma_device_type": 1 00:17:42.752 }, 00:17:42.752 { 00:17:42.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.752 "dma_device_type": 2 00:17:42.752 } 00:17:42.752 ], 00:17:42.752 "driver_specific": {} 00:17:42.752 } 00:17:42.752 ] 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:42.752 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.753 BaseBdev3 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.753 [ 00:17:42.753 { 00:17:42.753 "name": "BaseBdev3", 00:17:42.753 "aliases": [ 00:17:42.753 "c23439ec-b770-4f76-8796-44ebfdcf15d7" 00:17:42.753 ], 00:17:42.753 
"product_name": "Malloc disk", 00:17:42.753 "block_size": 512, 00:17:42.753 "num_blocks": 65536, 00:17:42.753 "uuid": "c23439ec-b770-4f76-8796-44ebfdcf15d7", 00:17:42.753 "assigned_rate_limits": { 00:17:42.753 "rw_ios_per_sec": 0, 00:17:42.753 "rw_mbytes_per_sec": 0, 00:17:42.753 "r_mbytes_per_sec": 0, 00:17:42.753 "w_mbytes_per_sec": 0 00:17:42.753 }, 00:17:42.753 "claimed": false, 00:17:42.753 "zoned": false, 00:17:42.753 "supported_io_types": { 00:17:42.753 "read": true, 00:17:42.753 "write": true, 00:17:42.753 "unmap": true, 00:17:42.753 "flush": true, 00:17:42.753 "reset": true, 00:17:42.753 "nvme_admin": false, 00:17:42.753 "nvme_io": false, 00:17:42.753 "nvme_io_md": false, 00:17:42.753 "write_zeroes": true, 00:17:42.753 "zcopy": true, 00:17:42.753 "get_zone_info": false, 00:17:42.753 "zone_management": false, 00:17:42.753 "zone_append": false, 00:17:42.753 "compare": false, 00:17:42.753 "compare_and_write": false, 00:17:42.753 "abort": true, 00:17:42.753 "seek_hole": false, 00:17:42.753 "seek_data": false, 00:17:42.753 "copy": true, 00:17:42.753 "nvme_iov_md": false 00:17:42.753 }, 00:17:42.753 "memory_domains": [ 00:17:42.753 { 00:17:42.753 "dma_device_id": "system", 00:17:42.753 "dma_device_type": 1 00:17:42.753 }, 00:17:42.753 { 00:17:42.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.753 "dma_device_type": 2 00:17:42.753 } 00:17:42.753 ], 00:17:42.753 "driver_specific": {} 00:17:42.753 } 00:17:42.753 ] 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.753 BaseBdev4 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:42.753 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.754 [ 00:17:42.754 { 00:17:42.754 "name": "BaseBdev4", 00:17:42.754 
"aliases": [ 00:17:42.754 "5c238b83-4fd5-471c-b00e-4616757fbe7b" 00:17:42.754 ], 00:17:42.754 "product_name": "Malloc disk", 00:17:42.754 "block_size": 512, 00:17:42.754 "num_blocks": 65536, 00:17:42.754 "uuid": "5c238b83-4fd5-471c-b00e-4616757fbe7b", 00:17:42.754 "assigned_rate_limits": { 00:17:42.754 "rw_ios_per_sec": 0, 00:17:42.754 "rw_mbytes_per_sec": 0, 00:17:42.754 "r_mbytes_per_sec": 0, 00:17:42.754 "w_mbytes_per_sec": 0 00:17:42.754 }, 00:17:42.754 "claimed": false, 00:17:42.754 "zoned": false, 00:17:42.754 "supported_io_types": { 00:17:42.754 "read": true, 00:17:42.754 "write": true, 00:17:42.754 "unmap": true, 00:17:42.754 "flush": true, 00:17:42.754 "reset": true, 00:17:42.754 "nvme_admin": false, 00:17:42.754 "nvme_io": false, 00:17:42.754 "nvme_io_md": false, 00:17:42.754 "write_zeroes": true, 00:17:42.754 "zcopy": true, 00:17:42.754 "get_zone_info": false, 00:17:42.754 "zone_management": false, 00:17:42.754 "zone_append": false, 00:17:42.754 "compare": false, 00:17:42.754 "compare_and_write": false, 00:17:42.754 "abort": true, 00:17:42.754 "seek_hole": false, 00:17:42.754 "seek_data": false, 00:17:42.754 "copy": true, 00:17:42.754 "nvme_iov_md": false 00:17:42.754 }, 00:17:42.754 "memory_domains": [ 00:17:42.754 { 00:17:42.754 "dma_device_id": "system", 00:17:42.754 "dma_device_type": 1 00:17:42.754 }, 00:17:42.754 { 00:17:42.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.754 "dma_device_type": 2 00:17:42.754 } 00:17:42.754 ], 00:17:42.754 "driver_specific": {} 00:17:42.754 } 00:17:42.754 ] 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:42.754 
19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:42.754 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.755 [2024-11-26 19:07:09.328635] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.755 [2024-11-26 19:07:09.328873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.755 [2024-11-26 19:07:09.329014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.755 [2024-11-26 19:07:09.331704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.755 [2024-11-26 19:07:09.331937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.755 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.064 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.064 "name": "Existed_Raid", 00:17:43.064 "uuid": "868ef29e-c26d-49d2-8644-4a0620e5af32", 00:17:43.064 "strip_size_kb": 64, 00:17:43.064 "state": "configuring", 00:17:43.064 "raid_level": "raid5f", 00:17:43.064 "superblock": true, 00:17:43.064 "num_base_bdevs": 4, 00:17:43.064 "num_base_bdevs_discovered": 3, 00:17:43.064 "num_base_bdevs_operational": 4, 00:17:43.064 "base_bdevs_list": [ 00:17:43.064 { 00:17:43.064 "name": "BaseBdev1", 00:17:43.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.064 "is_configured": false, 00:17:43.064 "data_offset": 0, 00:17:43.064 "data_size": 0 00:17:43.064 }, 00:17:43.064 { 00:17:43.064 "name": "BaseBdev2", 00:17:43.064 "uuid": "5bd4a85e-0e1a-4ba2-9800-998e9df18409", 00:17:43.064 "is_configured": true, 00:17:43.064 "data_offset": 2048, 00:17:43.064 "data_size": 63488 00:17:43.064 }, 00:17:43.064 { 00:17:43.064 "name": "BaseBdev3", 
00:17:43.064 "uuid": "c23439ec-b770-4f76-8796-44ebfdcf15d7", 00:17:43.064 "is_configured": true, 00:17:43.064 "data_offset": 2048, 00:17:43.064 "data_size": 63488 00:17:43.064 }, 00:17:43.064 { 00:17:43.064 "name": "BaseBdev4", 00:17:43.064 "uuid": "5c238b83-4fd5-471c-b00e-4616757fbe7b", 00:17:43.064 "is_configured": true, 00:17:43.064 "data_offset": 2048, 00:17:43.064 "data_size": 63488 00:17:43.064 } 00:17:43.064 ] 00:17:43.064 }' 00:17:43.064 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.064 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.322 [2024-11-26 19:07:09.892946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.322 
19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.322 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.581 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.581 "name": "Existed_Raid", 00:17:43.581 "uuid": "868ef29e-c26d-49d2-8644-4a0620e5af32", 00:17:43.581 "strip_size_kb": 64, 00:17:43.581 "state": "configuring", 00:17:43.581 "raid_level": "raid5f", 00:17:43.581 "superblock": true, 00:17:43.581 "num_base_bdevs": 4, 00:17:43.581 "num_base_bdevs_discovered": 2, 00:17:43.581 "num_base_bdevs_operational": 4, 00:17:43.581 "base_bdevs_list": [ 00:17:43.581 { 00:17:43.581 "name": "BaseBdev1", 00:17:43.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.581 "is_configured": false, 00:17:43.581 "data_offset": 0, 00:17:43.581 "data_size": 0 00:17:43.581 }, 00:17:43.581 { 00:17:43.581 "name": null, 00:17:43.581 "uuid": "5bd4a85e-0e1a-4ba2-9800-998e9df18409", 00:17:43.581 "is_configured": false, 00:17:43.581 "data_offset": 0, 00:17:43.581 "data_size": 63488 00:17:43.581 }, 00:17:43.581 { 
00:17:43.581 "name": "BaseBdev3", 00:17:43.581 "uuid": "c23439ec-b770-4f76-8796-44ebfdcf15d7", 00:17:43.581 "is_configured": true, 00:17:43.581 "data_offset": 2048, 00:17:43.581 "data_size": 63488 00:17:43.581 }, 00:17:43.581 { 00:17:43.581 "name": "BaseBdev4", 00:17:43.581 "uuid": "5c238b83-4fd5-471c-b00e-4616757fbe7b", 00:17:43.581 "is_configured": true, 00:17:43.581 "data_offset": 2048, 00:17:43.581 "data_size": 63488 00:17:43.581 } 00:17:43.581 ] 00:17:43.581 }' 00:17:43.581 19:07:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.581 19:07:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.838 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.838 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.838 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.838 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:43.838 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.096 [2024-11-26 19:07:10.554197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.096 BaseBdev1 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.096 [ 00:17:44.096 { 00:17:44.096 "name": "BaseBdev1", 00:17:44.096 "aliases": [ 00:17:44.096 "a9eba065-89fe-4ef8-baae-62dafa49a92f" 00:17:44.096 ], 00:17:44.096 "product_name": "Malloc disk", 00:17:44.096 "block_size": 512, 00:17:44.096 "num_blocks": 65536, 00:17:44.096 "uuid": "a9eba065-89fe-4ef8-baae-62dafa49a92f", 00:17:44.096 "assigned_rate_limits": { 00:17:44.096 "rw_ios_per_sec": 0, 00:17:44.096 "rw_mbytes_per_sec": 0, 00:17:44.096 
"r_mbytes_per_sec": 0, 00:17:44.096 "w_mbytes_per_sec": 0 00:17:44.096 }, 00:17:44.096 "claimed": true, 00:17:44.096 "claim_type": "exclusive_write", 00:17:44.096 "zoned": false, 00:17:44.096 "supported_io_types": { 00:17:44.096 "read": true, 00:17:44.096 "write": true, 00:17:44.096 "unmap": true, 00:17:44.096 "flush": true, 00:17:44.096 "reset": true, 00:17:44.096 "nvme_admin": false, 00:17:44.096 "nvme_io": false, 00:17:44.096 "nvme_io_md": false, 00:17:44.096 "write_zeroes": true, 00:17:44.096 "zcopy": true, 00:17:44.096 "get_zone_info": false, 00:17:44.096 "zone_management": false, 00:17:44.096 "zone_append": false, 00:17:44.096 "compare": false, 00:17:44.096 "compare_and_write": false, 00:17:44.096 "abort": true, 00:17:44.096 "seek_hole": false, 00:17:44.096 "seek_data": false, 00:17:44.096 "copy": true, 00:17:44.096 "nvme_iov_md": false 00:17:44.096 }, 00:17:44.096 "memory_domains": [ 00:17:44.096 { 00:17:44.096 "dma_device_id": "system", 00:17:44.096 "dma_device_type": 1 00:17:44.096 }, 00:17:44.096 { 00:17:44.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.096 "dma_device_type": 2 00:17:44.096 } 00:17:44.096 ], 00:17:44.096 "driver_specific": {} 00:17:44.096 } 00:17:44.096 ] 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.096 19:07:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.096 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.097 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.097 "name": "Existed_Raid", 00:17:44.097 "uuid": "868ef29e-c26d-49d2-8644-4a0620e5af32", 00:17:44.097 "strip_size_kb": 64, 00:17:44.097 "state": "configuring", 00:17:44.097 "raid_level": "raid5f", 00:17:44.097 "superblock": true, 00:17:44.097 "num_base_bdevs": 4, 00:17:44.097 "num_base_bdevs_discovered": 3, 00:17:44.097 "num_base_bdevs_operational": 4, 00:17:44.097 "base_bdevs_list": [ 00:17:44.097 { 00:17:44.097 "name": "BaseBdev1", 00:17:44.097 "uuid": "a9eba065-89fe-4ef8-baae-62dafa49a92f", 00:17:44.097 "is_configured": true, 00:17:44.097 "data_offset": 2048, 00:17:44.097 "data_size": 63488 00:17:44.097 
}, 00:17:44.097 { 00:17:44.097 "name": null, 00:17:44.097 "uuid": "5bd4a85e-0e1a-4ba2-9800-998e9df18409", 00:17:44.097 "is_configured": false, 00:17:44.097 "data_offset": 0, 00:17:44.097 "data_size": 63488 00:17:44.097 }, 00:17:44.097 { 00:17:44.097 "name": "BaseBdev3", 00:17:44.097 "uuid": "c23439ec-b770-4f76-8796-44ebfdcf15d7", 00:17:44.097 "is_configured": true, 00:17:44.097 "data_offset": 2048, 00:17:44.097 "data_size": 63488 00:17:44.097 }, 00:17:44.097 { 00:17:44.097 "name": "BaseBdev4", 00:17:44.097 "uuid": "5c238b83-4fd5-471c-b00e-4616757fbe7b", 00:17:44.097 "is_configured": true, 00:17:44.097 "data_offset": 2048, 00:17:44.097 "data_size": 63488 00:17:44.097 } 00:17:44.097 ] 00:17:44.097 }' 00:17:44.097 19:07:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.097 19:07:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.663 
[2024-11-26 19:07:11.230568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.663 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:44.921 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.921 "name": "Existed_Raid", 00:17:44.921 "uuid": "868ef29e-c26d-49d2-8644-4a0620e5af32", 00:17:44.921 "strip_size_kb": 64, 00:17:44.921 "state": "configuring", 00:17:44.921 "raid_level": "raid5f", 00:17:44.921 "superblock": true, 00:17:44.921 "num_base_bdevs": 4, 00:17:44.921 "num_base_bdevs_discovered": 2, 00:17:44.921 "num_base_bdevs_operational": 4, 00:17:44.921 "base_bdevs_list": [ 00:17:44.921 { 00:17:44.921 "name": "BaseBdev1", 00:17:44.921 "uuid": "a9eba065-89fe-4ef8-baae-62dafa49a92f", 00:17:44.921 "is_configured": true, 00:17:44.921 "data_offset": 2048, 00:17:44.921 "data_size": 63488 00:17:44.921 }, 00:17:44.921 { 00:17:44.921 "name": null, 00:17:44.921 "uuid": "5bd4a85e-0e1a-4ba2-9800-998e9df18409", 00:17:44.921 "is_configured": false, 00:17:44.921 "data_offset": 0, 00:17:44.921 "data_size": 63488 00:17:44.921 }, 00:17:44.921 { 00:17:44.921 "name": null, 00:17:44.921 "uuid": "c23439ec-b770-4f76-8796-44ebfdcf15d7", 00:17:44.921 "is_configured": false, 00:17:44.921 "data_offset": 0, 00:17:44.921 "data_size": 63488 00:17:44.921 }, 00:17:44.921 { 00:17:44.921 "name": "BaseBdev4", 00:17:44.921 "uuid": "5c238b83-4fd5-471c-b00e-4616757fbe7b", 00:17:44.921 "is_configured": true, 00:17:44.921 "data_offset": 2048, 00:17:44.921 "data_size": 63488 00:17:44.921 } 00:17:44.921 ] 00:17:44.921 }' 00:17:44.921 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.921 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.179 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.179 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:45.179 19:07:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.179 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.179 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.436 [2024-11-26 19:07:11.830756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.436 19:07:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.436 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.437 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.437 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.437 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.437 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.437 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.437 "name": "Existed_Raid", 00:17:45.437 "uuid": "868ef29e-c26d-49d2-8644-4a0620e5af32", 00:17:45.437 "strip_size_kb": 64, 00:17:45.437 "state": "configuring", 00:17:45.437 "raid_level": "raid5f", 00:17:45.437 "superblock": true, 00:17:45.437 "num_base_bdevs": 4, 00:17:45.437 "num_base_bdevs_discovered": 3, 00:17:45.437 "num_base_bdevs_operational": 4, 00:17:45.437 "base_bdevs_list": [ 00:17:45.437 { 00:17:45.437 "name": "BaseBdev1", 00:17:45.437 "uuid": "a9eba065-89fe-4ef8-baae-62dafa49a92f", 00:17:45.437 "is_configured": true, 00:17:45.437 "data_offset": 2048, 00:17:45.437 "data_size": 63488 00:17:45.437 }, 00:17:45.437 { 00:17:45.437 "name": null, 00:17:45.437 "uuid": "5bd4a85e-0e1a-4ba2-9800-998e9df18409", 00:17:45.437 "is_configured": false, 00:17:45.437 "data_offset": 0, 00:17:45.437 "data_size": 63488 00:17:45.437 }, 00:17:45.437 { 00:17:45.437 "name": "BaseBdev3", 00:17:45.437 "uuid": "c23439ec-b770-4f76-8796-44ebfdcf15d7", 00:17:45.437 "is_configured": true, 00:17:45.437 "data_offset": 2048, 00:17:45.437 "data_size": 63488 00:17:45.437 }, 00:17:45.437 { 
00:17:45.437 "name": "BaseBdev4", 00:17:45.437 "uuid": "5c238b83-4fd5-471c-b00e-4616757fbe7b", 00:17:45.437 "is_configured": true, 00:17:45.437 "data_offset": 2048, 00:17:45.437 "data_size": 63488 00:17:45.437 } 00:17:45.437 ] 00:17:45.437 }' 00:17:45.437 19:07:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.437 19:07:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.002 [2024-11-26 19:07:12.410944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.002 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.003 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.003 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.003 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.003 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.003 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.003 19:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.003 19:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.003 19:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.003 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.003 "name": "Existed_Raid", 00:17:46.003 "uuid": "868ef29e-c26d-49d2-8644-4a0620e5af32", 00:17:46.003 "strip_size_kb": 64, 00:17:46.003 "state": "configuring", 00:17:46.003 "raid_level": "raid5f", 00:17:46.003 "superblock": true, 00:17:46.003 "num_base_bdevs": 4, 00:17:46.003 "num_base_bdevs_discovered": 2, 00:17:46.003 
"num_base_bdevs_operational": 4, 00:17:46.003 "base_bdevs_list": [ 00:17:46.003 { 00:17:46.003 "name": null, 00:17:46.003 "uuid": "a9eba065-89fe-4ef8-baae-62dafa49a92f", 00:17:46.003 "is_configured": false, 00:17:46.003 "data_offset": 0, 00:17:46.003 "data_size": 63488 00:17:46.003 }, 00:17:46.003 { 00:17:46.003 "name": null, 00:17:46.003 "uuid": "5bd4a85e-0e1a-4ba2-9800-998e9df18409", 00:17:46.003 "is_configured": false, 00:17:46.003 "data_offset": 0, 00:17:46.003 "data_size": 63488 00:17:46.003 }, 00:17:46.003 { 00:17:46.003 "name": "BaseBdev3", 00:17:46.003 "uuid": "c23439ec-b770-4f76-8796-44ebfdcf15d7", 00:17:46.003 "is_configured": true, 00:17:46.003 "data_offset": 2048, 00:17:46.003 "data_size": 63488 00:17:46.003 }, 00:17:46.003 { 00:17:46.003 "name": "BaseBdev4", 00:17:46.003 "uuid": "5c238b83-4fd5-471c-b00e-4616757fbe7b", 00:17:46.003 "is_configured": true, 00:17:46.003 "data_offset": 2048, 00:17:46.003 "data_size": 63488 00:17:46.003 } 00:17:46.003 ] 00:17:46.003 }' 00:17:46.003 19:07:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.003 19:07:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.568 [2024-11-26 19:07:13.112186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.568 "name": "Existed_Raid", 00:17:46.568 "uuid": "868ef29e-c26d-49d2-8644-4a0620e5af32", 00:17:46.568 "strip_size_kb": 64, 00:17:46.568 "state": "configuring", 00:17:46.568 "raid_level": "raid5f", 00:17:46.568 "superblock": true, 00:17:46.568 "num_base_bdevs": 4, 00:17:46.568 "num_base_bdevs_discovered": 3, 00:17:46.568 "num_base_bdevs_operational": 4, 00:17:46.568 "base_bdevs_list": [ 00:17:46.568 { 00:17:46.568 "name": null, 00:17:46.568 "uuid": "a9eba065-89fe-4ef8-baae-62dafa49a92f", 00:17:46.568 "is_configured": false, 00:17:46.568 "data_offset": 0, 00:17:46.568 "data_size": 63488 00:17:46.568 }, 00:17:46.568 { 00:17:46.568 "name": "BaseBdev2", 00:17:46.568 "uuid": "5bd4a85e-0e1a-4ba2-9800-998e9df18409", 00:17:46.568 "is_configured": true, 00:17:46.568 "data_offset": 2048, 00:17:46.568 "data_size": 63488 00:17:46.568 }, 00:17:46.568 { 00:17:46.568 "name": "BaseBdev3", 00:17:46.568 "uuid": "c23439ec-b770-4f76-8796-44ebfdcf15d7", 00:17:46.568 "is_configured": true, 00:17:46.568 "data_offset": 2048, 00:17:46.568 "data_size": 63488 00:17:46.568 }, 00:17:46.568 { 00:17:46.568 "name": "BaseBdev4", 00:17:46.568 "uuid": "5c238b83-4fd5-471c-b00e-4616757fbe7b", 00:17:46.568 "is_configured": true, 00:17:46.568 "data_offset": 2048, 00:17:46.568 "data_size": 63488 00:17:46.568 } 00:17:46.568 ] 00:17:46.568 }' 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.568 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a9eba065-89fe-4ef8-baae-62dafa49a92f 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.135 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.394 [2024-11-26 19:07:13.794161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:47.394 [2024-11-26 19:07:13.794540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:47.394 [2024-11-26 
19:07:13.794559] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:47.394 NewBaseBdev 00:17:47.394 [2024-11-26 19:07:13.794893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.394 [2024-11-26 19:07:13.801408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:47.394 [2024-11-26 19:07:13.801574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:47.394 [2024-11-26 19:07:13.801919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.394 [ 00:17:47.394 { 00:17:47.394 "name": "NewBaseBdev", 00:17:47.394 "aliases": [ 00:17:47.394 "a9eba065-89fe-4ef8-baae-62dafa49a92f" 00:17:47.394 ], 00:17:47.394 "product_name": "Malloc disk", 00:17:47.394 "block_size": 512, 00:17:47.394 "num_blocks": 65536, 00:17:47.394 "uuid": "a9eba065-89fe-4ef8-baae-62dafa49a92f", 00:17:47.394 "assigned_rate_limits": { 00:17:47.394 "rw_ios_per_sec": 0, 00:17:47.394 "rw_mbytes_per_sec": 0, 00:17:47.394 "r_mbytes_per_sec": 0, 00:17:47.394 "w_mbytes_per_sec": 0 00:17:47.394 }, 00:17:47.394 "claimed": true, 00:17:47.394 "claim_type": "exclusive_write", 00:17:47.394 "zoned": false, 00:17:47.394 "supported_io_types": { 00:17:47.394 "read": true, 00:17:47.394 "write": true, 00:17:47.394 "unmap": true, 00:17:47.394 "flush": true, 00:17:47.394 "reset": true, 00:17:47.394 "nvme_admin": false, 00:17:47.394 "nvme_io": false, 00:17:47.394 "nvme_io_md": false, 00:17:47.394 "write_zeroes": true, 00:17:47.394 "zcopy": true, 00:17:47.394 "get_zone_info": false, 00:17:47.394 "zone_management": false, 00:17:47.394 "zone_append": false, 00:17:47.394 "compare": false, 00:17:47.394 "compare_and_write": false, 00:17:47.394 "abort": true, 00:17:47.394 "seek_hole": false, 00:17:47.394 "seek_data": false, 00:17:47.394 "copy": true, 00:17:47.394 "nvme_iov_md": false 00:17:47.394 }, 00:17:47.394 "memory_domains": [ 00:17:47.394 { 00:17:47.394 "dma_device_id": "system", 00:17:47.394 "dma_device_type": 1 00:17:47.394 }, 00:17:47.394 { 00:17:47.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.394 "dma_device_type": 2 00:17:47.394 } 00:17:47.394 ], 00:17:47.394 "driver_specific": {} 00:17:47.394 } 00:17:47.394 ] 00:17:47.394 19:07:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:47.394 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.394 "name": "Existed_Raid", 00:17:47.394 "uuid": "868ef29e-c26d-49d2-8644-4a0620e5af32", 00:17:47.394 "strip_size_kb": 64, 00:17:47.394 "state": "online", 00:17:47.394 "raid_level": "raid5f", 00:17:47.394 "superblock": true, 00:17:47.394 "num_base_bdevs": 4, 00:17:47.394 "num_base_bdevs_discovered": 4, 00:17:47.394 "num_base_bdevs_operational": 4, 00:17:47.394 "base_bdevs_list": [ 00:17:47.394 { 00:17:47.394 "name": "NewBaseBdev", 00:17:47.394 "uuid": "a9eba065-89fe-4ef8-baae-62dafa49a92f", 00:17:47.394 "is_configured": true, 00:17:47.394 "data_offset": 2048, 00:17:47.394 "data_size": 63488 00:17:47.394 }, 00:17:47.394 { 00:17:47.394 "name": "BaseBdev2", 00:17:47.394 "uuid": "5bd4a85e-0e1a-4ba2-9800-998e9df18409", 00:17:47.394 "is_configured": true, 00:17:47.394 "data_offset": 2048, 00:17:47.394 "data_size": 63488 00:17:47.394 }, 00:17:47.394 { 00:17:47.394 "name": "BaseBdev3", 00:17:47.394 "uuid": "c23439ec-b770-4f76-8796-44ebfdcf15d7", 00:17:47.394 "is_configured": true, 00:17:47.395 "data_offset": 2048, 00:17:47.395 "data_size": 63488 00:17:47.395 }, 00:17:47.395 { 00:17:47.395 "name": "BaseBdev4", 00:17:47.395 "uuid": "5c238b83-4fd5-471c-b00e-4616757fbe7b", 00:17:47.395 "is_configured": true, 00:17:47.395 "data_offset": 2048, 00:17:47.395 "data_size": 63488 00:17:47.395 } 00:17:47.395 ] 00:17:47.395 }' 00:17:47.395 19:07:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.395 19:07:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.961 [2024-11-26 19:07:14.370328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.961 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:47.961 "name": "Existed_Raid", 00:17:47.961 "aliases": [ 00:17:47.961 "868ef29e-c26d-49d2-8644-4a0620e5af32" 00:17:47.961 ], 00:17:47.962 "product_name": "Raid Volume", 00:17:47.962 "block_size": 512, 00:17:47.962 "num_blocks": 190464, 00:17:47.962 "uuid": "868ef29e-c26d-49d2-8644-4a0620e5af32", 00:17:47.962 "assigned_rate_limits": { 00:17:47.962 "rw_ios_per_sec": 0, 00:17:47.962 "rw_mbytes_per_sec": 0, 00:17:47.962 "r_mbytes_per_sec": 0, 00:17:47.962 "w_mbytes_per_sec": 0 00:17:47.962 }, 00:17:47.962 "claimed": false, 00:17:47.962 "zoned": false, 00:17:47.962 "supported_io_types": { 00:17:47.962 "read": true, 00:17:47.962 "write": true, 00:17:47.962 "unmap": false, 00:17:47.962 "flush": false, 00:17:47.962 "reset": true, 00:17:47.962 "nvme_admin": false, 00:17:47.962 "nvme_io": false, 
00:17:47.962 "nvme_io_md": false, 00:17:47.962 "write_zeroes": true, 00:17:47.962 "zcopy": false, 00:17:47.962 "get_zone_info": false, 00:17:47.962 "zone_management": false, 00:17:47.962 "zone_append": false, 00:17:47.962 "compare": false, 00:17:47.962 "compare_and_write": false, 00:17:47.962 "abort": false, 00:17:47.962 "seek_hole": false, 00:17:47.962 "seek_data": false, 00:17:47.962 "copy": false, 00:17:47.962 "nvme_iov_md": false 00:17:47.962 }, 00:17:47.962 "driver_specific": { 00:17:47.962 "raid": { 00:17:47.962 "uuid": "868ef29e-c26d-49d2-8644-4a0620e5af32", 00:17:47.962 "strip_size_kb": 64, 00:17:47.962 "state": "online", 00:17:47.962 "raid_level": "raid5f", 00:17:47.962 "superblock": true, 00:17:47.962 "num_base_bdevs": 4, 00:17:47.962 "num_base_bdevs_discovered": 4, 00:17:47.962 "num_base_bdevs_operational": 4, 00:17:47.962 "base_bdevs_list": [ 00:17:47.962 { 00:17:47.962 "name": "NewBaseBdev", 00:17:47.962 "uuid": "a9eba065-89fe-4ef8-baae-62dafa49a92f", 00:17:47.962 "is_configured": true, 00:17:47.962 "data_offset": 2048, 00:17:47.962 "data_size": 63488 00:17:47.962 }, 00:17:47.962 { 00:17:47.962 "name": "BaseBdev2", 00:17:47.962 "uuid": "5bd4a85e-0e1a-4ba2-9800-998e9df18409", 00:17:47.962 "is_configured": true, 00:17:47.962 "data_offset": 2048, 00:17:47.962 "data_size": 63488 00:17:47.962 }, 00:17:47.962 { 00:17:47.962 "name": "BaseBdev3", 00:17:47.962 "uuid": "c23439ec-b770-4f76-8796-44ebfdcf15d7", 00:17:47.962 "is_configured": true, 00:17:47.962 "data_offset": 2048, 00:17:47.962 "data_size": 63488 00:17:47.962 }, 00:17:47.962 { 00:17:47.962 "name": "BaseBdev4", 00:17:47.962 "uuid": "5c238b83-4fd5-471c-b00e-4616757fbe7b", 00:17:47.962 "is_configured": true, 00:17:47.962 "data_offset": 2048, 00:17:47.962 "data_size": 63488 00:17:47.962 } 00:17:47.962 ] 00:17:47.962 } 00:17:47.962 } 00:17:47.962 }' 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:47.962 BaseBdev2 00:17:47.962 BaseBdev3 00:17:47.962 BaseBdev4' 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.962 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:48.220 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.221 [2024-11-26 19:07:14.770084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:48.221 [2024-11-26 19:07:14.770271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.221 [2024-11-26 19:07:14.770423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.221 [2024-11-26 19:07:14.770878] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.221 [2024-11-26 19:07:14.770897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84349 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84349 ']' 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84349 00:17:48.221 19:07:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84349 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.221 killing process with pid 84349 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84349' 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84349 00:17:48.221 [2024-11-26 19:07:14.811789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.221 19:07:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84349 00:17:48.788 [2024-11-26 19:07:15.188682] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:50.163 19:07:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:50.163 00:17:50.163 real 0m13.630s 00:17:50.163 user 0m22.371s 00:17:50.163 sys 0m2.045s 00:17:50.163 19:07:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.163 ************************************ 00:17:50.163 END TEST raid5f_state_function_test_sb 00:17:50.163 ************************************ 00:17:50.163 19:07:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.163 19:07:16 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:50.163 19:07:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:50.163 
19:07:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.163 19:07:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.163 ************************************ 00:17:50.163 START TEST raid5f_superblock_test 00:17:50.163 ************************************ 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85031 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85031 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85031 ']' 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.163 19:07:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.163 [2024-11-26 19:07:16.568408] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:17:50.163 [2024-11-26 19:07:16.568636] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85031 ] 00:17:50.163 [2024-11-26 19:07:16.765004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.420 [2024-11-26 19:07:16.916214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.677 [2024-11-26 19:07:17.139585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.677 [2024-11-26 19:07:17.139635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.932 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.188 malloc1 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.188 [2024-11-26 19:07:17.604177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:51.188 [2024-11-26 19:07:17.604315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.188 [2024-11-26 19:07:17.604351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:51.188 [2024-11-26 19:07:17.604368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.188 [2024-11-26 19:07:17.607630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.188 [2024-11-26 19:07:17.607671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:51.188 pt1 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.188 malloc2 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.188 [2024-11-26 19:07:17.665505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.188 [2024-11-26 19:07:17.665585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.188 [2024-11-26 19:07:17.665635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:51.188 [2024-11-26 19:07:17.665648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.188 [2024-11-26 19:07:17.668788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.188 [2024-11-26 19:07:17.668856] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.188 pt2 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.188 malloc3 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.188 [2024-11-26 19:07:17.737713] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:51.188 [2024-11-26 19:07:17.737936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.188 [2024-11-26 19:07:17.738016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:51.188 [2024-11-26 19:07:17.738142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.188 [2024-11-26 19:07:17.741257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.188 [2024-11-26 19:07:17.741434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:51.188 pt3 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.188 19:07:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.188 malloc4 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.188 [2024-11-26 19:07:17.798702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:51.188 [2024-11-26 19:07:17.798785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.188 [2024-11-26 19:07:17.798815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:51.188 [2024-11-26 19:07:17.798828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.188 [2024-11-26 19:07:17.802014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.188 [2024-11-26 19:07:17.802057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:51.188 pt4 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.188 19:07:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.189 [2024-11-26 19:07:17.806972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:51.444 [2024-11-26 19:07:17.809893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.444 [2024-11-26 19:07:17.810172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:51.444 [2024-11-26 19:07:17.810445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:51.444 [2024-11-26 19:07:17.810856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:51.444 [2024-11-26 19:07:17.811009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:51.444 [2024-11-26 19:07:17.811416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:51.444 [2024-11-26 19:07:17.818287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:51.444 [2024-11-26 19:07:17.818516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:51.444 [2024-11-26 19:07:17.818930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.444 
19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.444 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.445 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.445 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.445 "name": "raid_bdev1", 00:17:51.445 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:51.445 "strip_size_kb": 64, 00:17:51.445 "state": "online", 00:17:51.445 "raid_level": "raid5f", 00:17:51.445 "superblock": true, 00:17:51.445 "num_base_bdevs": 4, 00:17:51.445 "num_base_bdevs_discovered": 4, 00:17:51.445 "num_base_bdevs_operational": 4, 00:17:51.445 "base_bdevs_list": [ 00:17:51.445 { 00:17:51.445 "name": "pt1", 00:17:51.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:51.445 "is_configured": true, 00:17:51.445 "data_offset": 2048, 00:17:51.445 "data_size": 63488 00:17:51.445 }, 00:17:51.445 { 00:17:51.445 "name": "pt2", 00:17:51.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.445 "is_configured": true, 00:17:51.445 "data_offset": 2048, 00:17:51.445 
"data_size": 63488 00:17:51.445 }, 00:17:51.445 { 00:17:51.445 "name": "pt3", 00:17:51.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:51.445 "is_configured": true, 00:17:51.445 "data_offset": 2048, 00:17:51.445 "data_size": 63488 00:17:51.445 }, 00:17:51.445 { 00:17:51.445 "name": "pt4", 00:17:51.445 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:51.445 "is_configured": true, 00:17:51.445 "data_offset": 2048, 00:17:51.445 "data_size": 63488 00:17:51.445 } 00:17:51.445 ] 00:17:51.445 }' 00:17:51.445 19:07:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.445 19:07:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:52.008 [2024-11-26 19:07:18.355920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:52.008 "name": "raid_bdev1", 00:17:52.008 "aliases": [ 00:17:52.008 "2effc380-069a-4640-bfcd-cc4725dd87f7" 00:17:52.008 ], 00:17:52.008 "product_name": "Raid Volume", 00:17:52.008 "block_size": 512, 00:17:52.008 "num_blocks": 190464, 00:17:52.008 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:52.008 "assigned_rate_limits": { 00:17:52.008 "rw_ios_per_sec": 0, 00:17:52.008 "rw_mbytes_per_sec": 0, 00:17:52.008 "r_mbytes_per_sec": 0, 00:17:52.008 "w_mbytes_per_sec": 0 00:17:52.008 }, 00:17:52.008 "claimed": false, 00:17:52.008 "zoned": false, 00:17:52.008 "supported_io_types": { 00:17:52.008 "read": true, 00:17:52.008 "write": true, 00:17:52.008 "unmap": false, 00:17:52.008 "flush": false, 00:17:52.008 "reset": true, 00:17:52.008 "nvme_admin": false, 00:17:52.008 "nvme_io": false, 00:17:52.008 "nvme_io_md": false, 00:17:52.008 "write_zeroes": true, 00:17:52.008 "zcopy": false, 00:17:52.008 "get_zone_info": false, 00:17:52.008 "zone_management": false, 00:17:52.008 "zone_append": false, 00:17:52.008 "compare": false, 00:17:52.008 "compare_and_write": false, 00:17:52.008 "abort": false, 00:17:52.008 "seek_hole": false, 00:17:52.008 "seek_data": false, 00:17:52.008 "copy": false, 00:17:52.008 "nvme_iov_md": false 00:17:52.008 }, 00:17:52.008 "driver_specific": { 00:17:52.008 "raid": { 00:17:52.008 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:52.008 "strip_size_kb": 64, 00:17:52.008 "state": "online", 00:17:52.008 "raid_level": "raid5f", 00:17:52.008 "superblock": true, 00:17:52.008 "num_base_bdevs": 4, 00:17:52.008 "num_base_bdevs_discovered": 4, 00:17:52.008 "num_base_bdevs_operational": 4, 00:17:52.008 "base_bdevs_list": [ 00:17:52.008 { 00:17:52.008 "name": "pt1", 00:17:52.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:52.008 "is_configured": true, 00:17:52.008 "data_offset": 2048, 
00:17:52.008 "data_size": 63488 00:17:52.008 }, 00:17:52.008 { 00:17:52.008 "name": "pt2", 00:17:52.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.008 "is_configured": true, 00:17:52.008 "data_offset": 2048, 00:17:52.008 "data_size": 63488 00:17:52.008 }, 00:17:52.008 { 00:17:52.008 "name": "pt3", 00:17:52.008 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:52.008 "is_configured": true, 00:17:52.008 "data_offset": 2048, 00:17:52.008 "data_size": 63488 00:17:52.008 }, 00:17:52.008 { 00:17:52.008 "name": "pt4", 00:17:52.008 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:52.008 "is_configured": true, 00:17:52.008 "data_offset": 2048, 00:17:52.008 "data_size": 63488 00:17:52.008 } 00:17:52.008 ] 00:17:52.008 } 00:17:52.008 } 00:17:52.008 }' 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:52.008 pt2 00:17:52.008 pt3 00:17:52.008 pt4' 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.008 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.009 19:07:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.009 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:52.266 [2024-11-26 19:07:18.735916] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2effc380-069a-4640-bfcd-cc4725dd87f7 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
2effc380-069a-4640-bfcd-cc4725dd87f7 ']' 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.266 [2024-11-26 19:07:18.783692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.266 [2024-11-26 19:07:18.783859] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.266 [2024-11-26 19:07:18.783986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.266 [2024-11-26 19:07:18.784110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.266 [2024-11-26 19:07:18.784135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:52.266 
19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.266 19:07:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.266 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.523 [2024-11-26 19:07:18.947770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:52.523 [2024-11-26 19:07:18.950668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:52.523 [2024-11-26 19:07:18.950751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:52.523 [2024-11-26 19:07:18.950807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:52.523 [2024-11-26 19:07:18.950895] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:52.523 [2024-11-26 19:07:18.951003] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:52.523 [2024-11-26 19:07:18.951036] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:52.523 [2024-11-26 19:07:18.951066] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:52.523 [2024-11-26 19:07:18.951087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.523 [2024-11-26 19:07:18.951104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:52.523 request: 00:17:52.523 { 00:17:52.523 "name": "raid_bdev1", 00:17:52.523 "raid_level": "raid5f", 00:17:52.523 "base_bdevs": [ 00:17:52.523 "malloc1", 00:17:52.523 "malloc2", 00:17:52.523 "malloc3", 00:17:52.523 "malloc4" 00:17:52.523 ], 00:17:52.523 "strip_size_kb": 64, 00:17:52.523 "superblock": false, 00:17:52.523 "method": "bdev_raid_create", 00:17:52.523 "req_id": 1 00:17:52.523 } 00:17:52.523 Got JSON-RPC error response 
00:17:52.523 response: 00:17:52.523 { 00:17:52.523 "code": -17, 00:17:52.523 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:52.523 } 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.523 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.524 19:07:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.524 [2024-11-26 19:07:19.015928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.524 [2024-11-26 19:07:19.015993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:52.524 [2024-11-26 19:07:19.016020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:52.524 [2024-11-26 19:07:19.016036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.524 [2024-11-26 19:07:19.019393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.524 [2024-11-26 19:07:19.019460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.524 [2024-11-26 19:07:19.019561] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:52.524 [2024-11-26 19:07:19.019646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.524 pt1 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.524 "name": "raid_bdev1", 00:17:52.524 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:52.524 "strip_size_kb": 64, 00:17:52.524 "state": "configuring", 00:17:52.524 "raid_level": "raid5f", 00:17:52.524 "superblock": true, 00:17:52.524 "num_base_bdevs": 4, 00:17:52.524 "num_base_bdevs_discovered": 1, 00:17:52.524 "num_base_bdevs_operational": 4, 00:17:52.524 "base_bdevs_list": [ 00:17:52.524 { 00:17:52.524 "name": "pt1", 00:17:52.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:52.524 "is_configured": true, 00:17:52.524 "data_offset": 2048, 00:17:52.524 "data_size": 63488 00:17:52.524 }, 00:17:52.524 { 00:17:52.524 "name": null, 00:17:52.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.524 "is_configured": false, 00:17:52.524 "data_offset": 2048, 00:17:52.524 "data_size": 63488 00:17:52.524 }, 00:17:52.524 { 00:17:52.524 "name": null, 00:17:52.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:52.524 "is_configured": false, 00:17:52.524 "data_offset": 2048, 00:17:52.524 "data_size": 63488 00:17:52.524 }, 00:17:52.524 { 00:17:52.524 "name": null, 00:17:52.524 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:52.524 "is_configured": false, 00:17:52.524 "data_offset": 2048, 00:17:52.524 "data_size": 63488 00:17:52.524 } 00:17:52.524 ] 00:17:52.524 }' 
00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.524 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 [2024-11-26 19:07:19.556128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.087 [2024-11-26 19:07:19.556237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.087 [2024-11-26 19:07:19.556266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:53.087 [2024-11-26 19:07:19.556283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.087 [2024-11-26 19:07:19.556950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.087 [2024-11-26 19:07:19.556990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.087 [2024-11-26 19:07:19.557106] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:53.087 [2024-11-26 19:07:19.557169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.087 pt2 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.087 [2024-11-26 19:07:19.564101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.087 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.088 "name": "raid_bdev1", 00:17:53.088 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:53.088 "strip_size_kb": 64, 00:17:53.088 "state": "configuring", 00:17:53.088 "raid_level": "raid5f", 00:17:53.088 "superblock": true, 00:17:53.088 "num_base_bdevs": 4, 00:17:53.088 "num_base_bdevs_discovered": 1, 00:17:53.088 "num_base_bdevs_operational": 4, 00:17:53.088 "base_bdevs_list": [ 00:17:53.088 { 00:17:53.088 "name": "pt1", 00:17:53.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.088 "is_configured": true, 00:17:53.088 "data_offset": 2048, 00:17:53.088 "data_size": 63488 00:17:53.088 }, 00:17:53.088 { 00:17:53.088 "name": null, 00:17:53.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.088 "is_configured": false, 00:17:53.088 "data_offset": 0, 00:17:53.088 "data_size": 63488 00:17:53.088 }, 00:17:53.088 { 00:17:53.088 "name": null, 00:17:53.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:53.088 "is_configured": false, 00:17:53.088 "data_offset": 2048, 00:17:53.088 "data_size": 63488 00:17:53.088 }, 00:17:53.088 { 00:17:53.088 "name": null, 00:17:53.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:53.088 "is_configured": false, 00:17:53.088 "data_offset": 2048, 00:17:53.088 "data_size": 63488 00:17:53.088 } 00:17:53.088 ] 00:17:53.088 }' 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.088 19:07:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.655 [2024-11-26 19:07:20.104333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.655 [2024-11-26 19:07:20.104431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.655 [2024-11-26 19:07:20.104464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:53.655 [2024-11-26 19:07:20.104479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.655 [2024-11-26 19:07:20.105167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.655 [2024-11-26 19:07:20.105200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.655 [2024-11-26 19:07:20.105409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:53.655 [2024-11-26 19:07:20.105444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.655 pt2 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.655 [2024-11-26 19:07:20.116232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:53.655 [2024-11-26 19:07:20.116328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.655 [2024-11-26 19:07:20.116364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:53.655 [2024-11-26 19:07:20.116379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.655 [2024-11-26 19:07:20.116893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.655 [2024-11-26 19:07:20.116926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:53.655 [2024-11-26 19:07:20.117006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:53.655 [2024-11-26 19:07:20.117041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:53.655 pt3 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.655 [2024-11-26 19:07:20.124219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:53.655 [2024-11-26 19:07:20.124268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.655 [2024-11-26 19:07:20.124305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:53.655 [2024-11-26 19:07:20.124321] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.655 [2024-11-26 19:07:20.124812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.655 [2024-11-26 19:07:20.124869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:53.655 [2024-11-26 19:07:20.124955] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:53.655 [2024-11-26 19:07:20.124987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:53.655 [2024-11-26 19:07:20.125178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:53.655 [2024-11-26 19:07:20.125232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:53.655 [2024-11-26 19:07:20.125617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:53.655 [2024-11-26 19:07:20.132245] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:53.655 [2024-11-26 19:07:20.132332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:53.655 [2024-11-26 19:07:20.132563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.655 pt4 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.655 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.655 "name": "raid_bdev1", 00:17:53.655 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:53.655 "strip_size_kb": 64, 00:17:53.655 "state": "online", 00:17:53.656 "raid_level": "raid5f", 00:17:53.656 "superblock": true, 00:17:53.656 "num_base_bdevs": 4, 00:17:53.656 "num_base_bdevs_discovered": 4, 00:17:53.656 "num_base_bdevs_operational": 4, 00:17:53.656 "base_bdevs_list": [ 00:17:53.656 { 00:17:53.656 "name": "pt1", 00:17:53.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.656 "is_configured": true, 00:17:53.656 
"data_offset": 2048, 00:17:53.656 "data_size": 63488 00:17:53.656 }, 00:17:53.656 { 00:17:53.656 "name": "pt2", 00:17:53.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.656 "is_configured": true, 00:17:53.656 "data_offset": 2048, 00:17:53.656 "data_size": 63488 00:17:53.656 }, 00:17:53.656 { 00:17:53.656 "name": "pt3", 00:17:53.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:53.656 "is_configured": true, 00:17:53.656 "data_offset": 2048, 00:17:53.656 "data_size": 63488 00:17:53.656 }, 00:17:53.656 { 00:17:53.656 "name": "pt4", 00:17:53.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:53.656 "is_configured": true, 00:17:53.656 "data_offset": 2048, 00:17:53.656 "data_size": 63488 00:17:53.656 } 00:17:53.656 ] 00:17:53.656 }' 00:17:53.656 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.656 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:54.222 19:07:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.222 [2024-11-26 19:07:20.661144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:54.222 "name": "raid_bdev1", 00:17:54.222 "aliases": [ 00:17:54.222 "2effc380-069a-4640-bfcd-cc4725dd87f7" 00:17:54.222 ], 00:17:54.222 "product_name": "Raid Volume", 00:17:54.222 "block_size": 512, 00:17:54.222 "num_blocks": 190464, 00:17:54.222 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:54.222 "assigned_rate_limits": { 00:17:54.222 "rw_ios_per_sec": 0, 00:17:54.222 "rw_mbytes_per_sec": 0, 00:17:54.222 "r_mbytes_per_sec": 0, 00:17:54.222 "w_mbytes_per_sec": 0 00:17:54.222 }, 00:17:54.222 "claimed": false, 00:17:54.222 "zoned": false, 00:17:54.222 "supported_io_types": { 00:17:54.222 "read": true, 00:17:54.222 "write": true, 00:17:54.222 "unmap": false, 00:17:54.222 "flush": false, 00:17:54.222 "reset": true, 00:17:54.222 "nvme_admin": false, 00:17:54.222 "nvme_io": false, 00:17:54.222 "nvme_io_md": false, 00:17:54.222 "write_zeroes": true, 00:17:54.222 "zcopy": false, 00:17:54.222 "get_zone_info": false, 00:17:54.222 "zone_management": false, 00:17:54.222 "zone_append": false, 00:17:54.222 "compare": false, 00:17:54.222 "compare_and_write": false, 00:17:54.222 "abort": false, 00:17:54.222 "seek_hole": false, 00:17:54.222 "seek_data": false, 00:17:54.222 "copy": false, 00:17:54.222 "nvme_iov_md": false 00:17:54.222 }, 00:17:54.222 "driver_specific": { 00:17:54.222 "raid": { 00:17:54.222 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:54.222 "strip_size_kb": 64, 00:17:54.222 "state": "online", 00:17:54.222 "raid_level": "raid5f", 00:17:54.222 "superblock": true, 00:17:54.222 "num_base_bdevs": 4, 00:17:54.222 "num_base_bdevs_discovered": 4, 
00:17:54.222 "num_base_bdevs_operational": 4, 00:17:54.222 "base_bdevs_list": [ 00:17:54.222 { 00:17:54.222 "name": "pt1", 00:17:54.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.222 "is_configured": true, 00:17:54.222 "data_offset": 2048, 00:17:54.222 "data_size": 63488 00:17:54.222 }, 00:17:54.222 { 00:17:54.222 "name": "pt2", 00:17:54.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.222 "is_configured": true, 00:17:54.222 "data_offset": 2048, 00:17:54.222 "data_size": 63488 00:17:54.222 }, 00:17:54.222 { 00:17:54.222 "name": "pt3", 00:17:54.222 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:54.222 "is_configured": true, 00:17:54.222 "data_offset": 2048, 00:17:54.222 "data_size": 63488 00:17:54.222 }, 00:17:54.222 { 00:17:54.222 "name": "pt4", 00:17:54.222 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:54.222 "is_configured": true, 00:17:54.222 "data_offset": 2048, 00:17:54.222 "data_size": 63488 00:17:54.222 } 00:17:54.222 ] 00:17:54.222 } 00:17:54.222 } 00:17:54.222 }' 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:54.222 pt2 00:17:54.222 pt3 00:17:54.222 pt4' 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.222 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.480 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.481 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:54.481 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.481 19:07:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.481 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.481 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.481 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.481 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.481 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:54.481 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.481 19:07:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.481 19:07:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:54.481 [2024-11-26 19:07:21.049199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.481 
19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2effc380-069a-4640-bfcd-cc4725dd87f7 '!=' 2effc380-069a-4640-bfcd-cc4725dd87f7 ']' 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.481 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.739 [2024-11-26 19:07:21.105074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.739 "name": "raid_bdev1", 00:17:54.739 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:54.739 "strip_size_kb": 64, 00:17:54.739 "state": "online", 00:17:54.739 "raid_level": "raid5f", 00:17:54.739 "superblock": true, 00:17:54.739 "num_base_bdevs": 4, 00:17:54.739 "num_base_bdevs_discovered": 3, 00:17:54.739 "num_base_bdevs_operational": 3, 00:17:54.739 "base_bdevs_list": [ 00:17:54.739 { 00:17:54.739 "name": null, 00:17:54.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.739 "is_configured": false, 00:17:54.739 "data_offset": 0, 00:17:54.739 "data_size": 63488 00:17:54.739 }, 00:17:54.739 { 00:17:54.739 "name": "pt2", 00:17:54.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.739 "is_configured": true, 00:17:54.739 "data_offset": 2048, 00:17:54.739 "data_size": 63488 00:17:54.739 }, 00:17:54.739 { 00:17:54.739 "name": "pt3", 00:17:54.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:54.739 "is_configured": true, 00:17:54.739 "data_offset": 2048, 00:17:54.739 "data_size": 63488 00:17:54.739 }, 00:17:54.739 { 00:17:54.739 "name": "pt4", 00:17:54.739 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:54.739 "is_configured": true, 00:17:54.739 
"data_offset": 2048, 00:17:54.739 "data_size": 63488 00:17:54.739 } 00:17:54.739 ] 00:17:54.739 }' 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.739 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.307 [2024-11-26 19:07:21.653159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.307 [2024-11-26 19:07:21.653245] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.307 [2024-11-26 19:07:21.653381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.307 [2024-11-26 19:07:21.653494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.307 [2024-11-26 19:07:21.653515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.307 [2024-11-26 19:07:21.749152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:55.307 [2024-11-26 19:07:21.749252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.307 [2024-11-26 19:07:21.749281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:55.307 [2024-11-26 19:07:21.749320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.307 [2024-11-26 19:07:21.752536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.307 [2024-11-26 19:07:21.752579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:55.307 [2024-11-26 19:07:21.752722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:55.307 [2024-11-26 19:07:21.752792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:55.307 pt2 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.307 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.307 "name": "raid_bdev1", 00:17:55.308 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:55.308 "strip_size_kb": 64, 00:17:55.308 "state": "configuring", 00:17:55.308 "raid_level": "raid5f", 00:17:55.308 "superblock": true, 00:17:55.308 
"num_base_bdevs": 4, 00:17:55.308 "num_base_bdevs_discovered": 1, 00:17:55.308 "num_base_bdevs_operational": 3, 00:17:55.308 "base_bdevs_list": [ 00:17:55.308 { 00:17:55.308 "name": null, 00:17:55.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.308 "is_configured": false, 00:17:55.308 "data_offset": 2048, 00:17:55.308 "data_size": 63488 00:17:55.308 }, 00:17:55.308 { 00:17:55.308 "name": "pt2", 00:17:55.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.308 "is_configured": true, 00:17:55.308 "data_offset": 2048, 00:17:55.308 "data_size": 63488 00:17:55.308 }, 00:17:55.308 { 00:17:55.308 "name": null, 00:17:55.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:55.308 "is_configured": false, 00:17:55.308 "data_offset": 2048, 00:17:55.308 "data_size": 63488 00:17:55.308 }, 00:17:55.308 { 00:17:55.308 "name": null, 00:17:55.308 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:55.308 "is_configured": false, 00:17:55.308 "data_offset": 2048, 00:17:55.308 "data_size": 63488 00:17:55.308 } 00:17:55.308 ] 00:17:55.308 }' 00:17:55.308 19:07:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.308 19:07:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.875 [2024-11-26 19:07:22.301426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:55.875 [2024-11-26 
19:07:22.301712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.875 [2024-11-26 19:07:22.301759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:55.875 [2024-11-26 19:07:22.301775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.875 [2024-11-26 19:07:22.302488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.875 [2024-11-26 19:07:22.302513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:55.875 [2024-11-26 19:07:22.302643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:55.875 [2024-11-26 19:07:22.302692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:55.875 pt3 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.875 "name": "raid_bdev1", 00:17:55.875 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:55.875 "strip_size_kb": 64, 00:17:55.875 "state": "configuring", 00:17:55.875 "raid_level": "raid5f", 00:17:55.875 "superblock": true, 00:17:55.875 "num_base_bdevs": 4, 00:17:55.875 "num_base_bdevs_discovered": 2, 00:17:55.875 "num_base_bdevs_operational": 3, 00:17:55.875 "base_bdevs_list": [ 00:17:55.875 { 00:17:55.875 "name": null, 00:17:55.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.875 "is_configured": false, 00:17:55.875 "data_offset": 2048, 00:17:55.875 "data_size": 63488 00:17:55.875 }, 00:17:55.875 { 00:17:55.875 "name": "pt2", 00:17:55.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.875 "is_configured": true, 00:17:55.875 "data_offset": 2048, 00:17:55.875 "data_size": 63488 00:17:55.875 }, 00:17:55.875 { 00:17:55.875 "name": "pt3", 00:17:55.875 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:55.875 "is_configured": true, 00:17:55.875 "data_offset": 2048, 00:17:55.875 "data_size": 63488 00:17:55.875 }, 00:17:55.875 { 00:17:55.875 "name": null, 00:17:55.875 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:55.875 "is_configured": false, 00:17:55.875 "data_offset": 2048, 
00:17:55.875 "data_size": 63488 00:17:55.875 } 00:17:55.875 ] 00:17:55.875 }' 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.875 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.442 [2024-11-26 19:07:22.837616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:56.442 [2024-11-26 19:07:22.837743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.442 [2024-11-26 19:07:22.837779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:56.442 [2024-11-26 19:07:22.837793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.442 [2024-11-26 19:07:22.838562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.442 [2024-11-26 19:07:22.838598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:56.442 [2024-11-26 19:07:22.838718] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:56.442 [2024-11-26 19:07:22.838841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:56.442 [2024-11-26 19:07:22.839047] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:56.442 [2024-11-26 19:07:22.839064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:56.442 [2024-11-26 19:07:22.839474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:56.442 [2024-11-26 19:07:22.846665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:56.442 [2024-11-26 19:07:22.846723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:56.442 [2024-11-26 19:07:22.847081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.442 pt4 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.442 
19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.442 "name": "raid_bdev1", 00:17:56.442 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:56.442 "strip_size_kb": 64, 00:17:56.442 "state": "online", 00:17:56.442 "raid_level": "raid5f", 00:17:56.442 "superblock": true, 00:17:56.442 "num_base_bdevs": 4, 00:17:56.442 "num_base_bdevs_discovered": 3, 00:17:56.442 "num_base_bdevs_operational": 3, 00:17:56.442 "base_bdevs_list": [ 00:17:56.442 { 00:17:56.442 "name": null, 00:17:56.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.442 "is_configured": false, 00:17:56.442 "data_offset": 2048, 00:17:56.442 "data_size": 63488 00:17:56.442 }, 00:17:56.442 { 00:17:56.442 "name": "pt2", 00:17:56.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.442 "is_configured": true, 00:17:56.442 "data_offset": 2048, 00:17:56.442 "data_size": 63488 00:17:56.442 }, 00:17:56.442 { 00:17:56.442 "name": "pt3", 00:17:56.442 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:56.442 "is_configured": true, 00:17:56.442 "data_offset": 2048, 00:17:56.442 "data_size": 63488 00:17:56.442 }, 00:17:56.442 { 00:17:56.442 "name": "pt4", 00:17:56.442 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:56.442 "is_configured": true, 00:17:56.442 "data_offset": 2048, 00:17:56.442 "data_size": 63488 00:17:56.442 } 00:17:56.442 ] 00:17:56.442 }' 00:17:56.442 19:07:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.442 19:07:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.009 [2024-11-26 19:07:23.383721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.009 [2024-11-26 19:07:23.383763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.009 [2024-11-26 19:07:23.383899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.009 [2024-11-26 19:07:23.384002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.009 [2024-11-26 19:07:23.384024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.009 [2024-11-26 19:07:23.459703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.009 [2024-11-26 19:07:23.459810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.009 [2024-11-26 19:07:23.459872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:57.009 [2024-11-26 19:07:23.459893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.009 [2024-11-26 19:07:23.463197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.009 [2024-11-26 19:07:23.463291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.009 [2024-11-26 19:07:23.463467] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:57.009 [2024-11-26 19:07:23.463537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.009 
[2024-11-26 19:07:23.463741] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:57.009 [2024-11-26 19:07:23.463770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.009 [2024-11-26 19:07:23.463793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:57.009 [2024-11-26 19:07:23.463870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.009 [2024-11-26 19:07:23.464062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:57.009 pt1 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.009 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.010 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.010 19:07:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.010 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.010 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.010 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.010 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.010 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.010 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.010 "name": "raid_bdev1", 00:17:57.010 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:57.010 "strip_size_kb": 64, 00:17:57.010 "state": "configuring", 00:17:57.010 "raid_level": "raid5f", 00:17:57.010 "superblock": true, 00:17:57.010 "num_base_bdevs": 4, 00:17:57.010 "num_base_bdevs_discovered": 2, 00:17:57.010 "num_base_bdevs_operational": 3, 00:17:57.010 "base_bdevs_list": [ 00:17:57.010 { 00:17:57.010 "name": null, 00:17:57.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.010 "is_configured": false, 00:17:57.010 "data_offset": 2048, 00:17:57.010 "data_size": 63488 00:17:57.010 }, 00:17:57.010 { 00:17:57.010 "name": "pt2", 00:17:57.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.010 "is_configured": true, 00:17:57.010 "data_offset": 2048, 00:17:57.010 "data_size": 63488 00:17:57.010 }, 00:17:57.010 { 00:17:57.010 "name": "pt3", 00:17:57.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.010 "is_configured": true, 00:17:57.010 "data_offset": 2048, 00:17:57.010 "data_size": 63488 00:17:57.010 }, 00:17:57.010 { 00:17:57.010 "name": null, 00:17:57.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:57.010 "is_configured": false, 00:17:57.010 "data_offset": 2048, 00:17:57.010 "data_size": 63488 00:17:57.010 } 00:17:57.010 ] 
00:17:57.010 }' 00:17:57.010 19:07:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.010 19:07:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.578 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:57.578 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:57.578 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.578 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.578 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.578 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:57.578 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:57.578 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.578 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.578 [2024-11-26 19:07:24.088143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:57.578 [2024-11-26 19:07:24.088249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.578 [2024-11-26 19:07:24.088341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:57.578 [2024-11-26 19:07:24.088359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.579 [2024-11-26 19:07:24.089057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.579 [2024-11-26 19:07:24.089082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:57.579 [2024-11-26 19:07:24.089205] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:57.579 [2024-11-26 19:07:24.089241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:57.579 [2024-11-26 19:07:24.089733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:57.579 [2024-11-26 19:07:24.089883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:57.579 [2024-11-26 19:07:24.090301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:57.579 [2024-11-26 19:07:24.097713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:57.579 [2024-11-26 19:07:24.097897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:57.579 [2024-11-26 19:07:24.098469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.579 pt4 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.579 19:07:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.579 "name": "raid_bdev1", 00:17:57.579 "uuid": "2effc380-069a-4640-bfcd-cc4725dd87f7", 00:17:57.579 "strip_size_kb": 64, 00:17:57.579 "state": "online", 00:17:57.579 "raid_level": "raid5f", 00:17:57.579 "superblock": true, 00:17:57.579 "num_base_bdevs": 4, 00:17:57.579 "num_base_bdevs_discovered": 3, 00:17:57.579 "num_base_bdevs_operational": 3, 00:17:57.579 "base_bdevs_list": [ 00:17:57.579 { 00:17:57.579 "name": null, 00:17:57.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.579 "is_configured": false, 00:17:57.579 "data_offset": 2048, 00:17:57.579 "data_size": 63488 00:17:57.579 }, 00:17:57.579 { 00:17:57.579 "name": "pt2", 00:17:57.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.579 "is_configured": true, 00:17:57.579 "data_offset": 2048, 00:17:57.579 "data_size": 63488 00:17:57.579 }, 00:17:57.579 { 00:17:57.579 "name": "pt3", 00:17:57.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.579 "is_configured": true, 00:17:57.579 "data_offset": 2048, 00:17:57.579 "data_size": 63488 
00:17:57.579 }, 00:17:57.579 { 00:17:57.579 "name": "pt4", 00:17:57.579 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:57.579 "is_configured": true, 00:17:57.579 "data_offset": 2048, 00:17:57.579 "data_size": 63488 00:17:57.579 } 00:17:57.579 ] 00:17:57.579 }' 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.579 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.146 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:58.146 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.146 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.146 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:58.146 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.146 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:58.146 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:58.146 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.146 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.146 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:58.146 [2024-11-26 19:07:24.735408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.146 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.404 19:07:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2effc380-069a-4640-bfcd-cc4725dd87f7 '!=' 2effc380-069a-4640-bfcd-cc4725dd87f7 ']' 00:17:58.404 19:07:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85031 00:17:58.404 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 85031 ']' 00:17:58.404 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85031 00:17:58.404 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:58.404 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.404 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85031 00:17:58.404 killing process with pid 85031 00:17:58.404 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.404 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.404 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85031' 00:17:58.404 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 85031 00:17:58.404 19:07:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 85031 00:17:58.404 [2024-11-26 19:07:24.820059] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.404 [2024-11-26 19:07:24.820219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.404 [2024-11-26 19:07:24.820384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.404 [2024-11-26 19:07:24.820422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:58.662 [2024-11-26 19:07:25.203589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.037 19:07:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:00.037 
00:18:00.037 real 0m9.897s 00:18:00.037 user 0m16.105s 00:18:00.037 sys 0m1.533s 00:18:00.037 ************************************ 00:18:00.037 END TEST raid5f_superblock_test 00:18:00.037 ************************************ 00:18:00.038 19:07:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.038 19:07:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.038 19:07:26 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:00.038 19:07:26 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:00.038 19:07:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:00.038 19:07:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.038 19:07:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.038 ************************************ 00:18:00.038 START TEST raid5f_rebuild_test 00:18:00.038 ************************************ 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:00.038 19:07:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85528 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85528 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85528 ']' 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.038 19:07:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.038 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:00.038 Zero copy mechanism will not be used. 00:18:00.038 [2024-11-26 19:07:26.549564] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:18:00.038 [2024-11-26 19:07:26.549770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85528 ] 00:18:00.296 [2024-11-26 19:07:26.745637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.296 [2024-11-26 19:07:26.890343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.555 [2024-11-26 19:07:27.154532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.555 [2024-11-26 19:07:27.154699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.122 BaseBdev1_malloc 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.122 [2024-11-26 19:07:27.704182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:18:01.122 [2024-11-26 19:07:27.704677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.122 [2024-11-26 19:07:27.704723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:01.122 [2024-11-26 19:07:27.704744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.122 [2024-11-26 19:07:27.707728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.122 [2024-11-26 19:07:27.707789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:01.122 BaseBdev1 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.122 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 BaseBdev2_malloc 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 [2024-11-26 19:07:27.759657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:01.382 [2024-11-26 19:07:27.760040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.382 [2024-11-26 19:07:27.760086] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:01.382 [2024-11-26 19:07:27.760106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.382 [2024-11-26 19:07:27.763253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.382 BaseBdev2 00:18:01.382 [2024-11-26 19:07:27.763553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 BaseBdev3_malloc 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 [2024-11-26 19:07:27.821386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:01.382 [2024-11-26 19:07:27.821734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.382 [2024-11-26 19:07:27.821809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:01.382 [2024-11-26 19:07:27.822019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.382 
[2024-11-26 19:07:27.825111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.382 BaseBdev3 00:18:01.382 [2024-11-26 19:07:27.825298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 BaseBdev4_malloc 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 [2024-11-26 19:07:27.881178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:01.382 [2024-11-26 19:07:27.881616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.382 [2024-11-26 19:07:27.881696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:01.382 [2024-11-26 19:07:27.881852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.382 [2024-11-26 19:07:27.884989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.382 BaseBdev4 00:18:01.382 [2024-11-26 19:07:27.885154] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 spare_malloc 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 spare_delay 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.382 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.382 [2024-11-26 19:07:27.948167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:01.382 [2024-11-26 19:07:27.948350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.382 [2024-11-26 19:07:27.948378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:01.382 [2024-11-26 19:07:27.948395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.383 [2024-11-26 19:07:27.951461] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.383 [2024-11-26 19:07:27.951531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:01.383 spare 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.383 [2024-11-26 19:07:27.956396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.383 [2024-11-26 19:07:27.959452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.383 [2024-11-26 19:07:27.959702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:01.383 [2024-11-26 19:07:27.959813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:01.383 [2024-11-26 19:07:27.959988] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:01.383 [2024-11-26 19:07:27.960010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:01.383 [2024-11-26 19:07:27.960409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:01.383 [2024-11-26 19:07:27.967913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:01.383 [2024-11-26 19:07:27.968048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:01.383 [2024-11-26 19:07:27.968500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.383 19:07:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.642 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.642 "name": "raid_bdev1", 00:18:01.642 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:01.642 "strip_size_kb": 64, 00:18:01.642 "state": 
"online", 00:18:01.642 "raid_level": "raid5f", 00:18:01.642 "superblock": false, 00:18:01.642 "num_base_bdevs": 4, 00:18:01.642 "num_base_bdevs_discovered": 4, 00:18:01.642 "num_base_bdevs_operational": 4, 00:18:01.642 "base_bdevs_list": [ 00:18:01.642 { 00:18:01.642 "name": "BaseBdev1", 00:18:01.642 "uuid": "6538ba5d-b31e-517e-b8c7-ac406e3119dd", 00:18:01.642 "is_configured": true, 00:18:01.642 "data_offset": 0, 00:18:01.642 "data_size": 65536 00:18:01.642 }, 00:18:01.642 { 00:18:01.642 "name": "BaseBdev2", 00:18:01.642 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:01.642 "is_configured": true, 00:18:01.642 "data_offset": 0, 00:18:01.642 "data_size": 65536 00:18:01.642 }, 00:18:01.642 { 00:18:01.642 "name": "BaseBdev3", 00:18:01.642 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:01.642 "is_configured": true, 00:18:01.642 "data_offset": 0, 00:18:01.642 "data_size": 65536 00:18:01.642 }, 00:18:01.642 { 00:18:01.642 "name": "BaseBdev4", 00:18:01.642 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:01.642 "is_configured": true, 00:18:01.642 "data_offset": 0, 00:18:01.642 "data_size": 65536 00:18:01.642 } 00:18:01.642 ] 00:18:01.642 }' 00:18:01.642 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.642 19:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.209 [2024-11-26 19:07:28.545093] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:18:02.209 19:07:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:02.468 [2024-11-26 19:07:28.968826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:02.468 /dev/nbd0 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.468 1+0 records in 00:18:02.468 1+0 records out 00:18:02.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467205 s, 8.8 MB/s 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:02.468 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:03.403 512+0 records in 00:18:03.403 512+0 records out 00:18:03.403 100663296 bytes (101 MB, 96 MiB) copied, 0.698582 s, 144 MB/s 00:18:03.403 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:03.403 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:03.403 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:03.403 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:03.403 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:03.403 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.403 19:07:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:03.661 [2024-11-26 19:07:30.074299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.661 [2024-11-26 19:07:30.090398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.661 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.661 "name": "raid_bdev1", 00:18:03.661 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:03.661 "strip_size_kb": 64, 00:18:03.661 "state": "online", 00:18:03.661 "raid_level": "raid5f", 00:18:03.661 "superblock": false, 00:18:03.661 "num_base_bdevs": 4, 00:18:03.661 "num_base_bdevs_discovered": 3, 00:18:03.661 "num_base_bdevs_operational": 3, 00:18:03.661 "base_bdevs_list": [ 00:18:03.661 { 00:18:03.661 "name": null, 00:18:03.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.661 "is_configured": false, 00:18:03.661 "data_offset": 0, 00:18:03.661 "data_size": 65536 00:18:03.661 }, 00:18:03.661 { 00:18:03.661 "name": "BaseBdev2", 00:18:03.661 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:03.661 "is_configured": true, 00:18:03.661 "data_offset": 0, 00:18:03.661 "data_size": 65536 00:18:03.661 }, 00:18:03.661 { 00:18:03.661 "name": "BaseBdev3", 00:18:03.661 "uuid": 
"e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:03.661 "is_configured": true, 00:18:03.661 "data_offset": 0, 00:18:03.661 "data_size": 65536 00:18:03.661 }, 00:18:03.661 { 00:18:03.661 "name": "BaseBdev4", 00:18:03.661 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:03.661 "is_configured": true, 00:18:03.661 "data_offset": 0, 00:18:03.661 "data_size": 65536 00:18:03.661 } 00:18:03.661 ] 00:18:03.661 }' 00:18:03.662 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.662 19:07:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.229 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:04.229 19:07:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.229 19:07:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.229 [2024-11-26 19:07:30.626795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.229 [2024-11-26 19:07:30.642628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:04.229 19:07:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.229 19:07:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:04.229 [2024-11-26 19:07:30.652377] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.164 19:07:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.164 "name": "raid_bdev1", 00:18:05.164 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:05.164 "strip_size_kb": 64, 00:18:05.164 "state": "online", 00:18:05.164 "raid_level": "raid5f", 00:18:05.164 "superblock": false, 00:18:05.164 "num_base_bdevs": 4, 00:18:05.164 "num_base_bdevs_discovered": 4, 00:18:05.164 "num_base_bdevs_operational": 4, 00:18:05.164 "process": { 00:18:05.164 "type": "rebuild", 00:18:05.164 "target": "spare", 00:18:05.164 "progress": { 00:18:05.164 "blocks": 17280, 00:18:05.164 "percent": 8 00:18:05.164 } 00:18:05.164 }, 00:18:05.164 "base_bdevs_list": [ 00:18:05.164 { 00:18:05.164 "name": "spare", 00:18:05.164 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:05.164 "is_configured": true, 00:18:05.164 "data_offset": 0, 00:18:05.164 "data_size": 65536 00:18:05.164 }, 00:18:05.164 { 00:18:05.164 "name": "BaseBdev2", 00:18:05.164 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:05.164 "is_configured": true, 00:18:05.164 "data_offset": 0, 00:18:05.164 "data_size": 65536 00:18:05.164 }, 00:18:05.164 { 00:18:05.164 "name": "BaseBdev3", 00:18:05.164 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:05.164 "is_configured": true, 00:18:05.164 "data_offset": 0, 00:18:05.164 "data_size": 65536 00:18:05.164 }, 
00:18:05.164 { 00:18:05.164 "name": "BaseBdev4", 00:18:05.164 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:05.164 "is_configured": true, 00:18:05.164 "data_offset": 0, 00:18:05.164 "data_size": 65536 00:18:05.164 } 00:18:05.164 ] 00:18:05.164 }' 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.164 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.423 [2024-11-26 19:07:31.814586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.423 [2024-11-26 19:07:31.869256] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:05.423 [2024-11-26 19:07:31.869589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.423 [2024-11-26 19:07:31.869835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.423 [2024-11-26 19:07:31.869878] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.423 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.424 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.424 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.424 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.424 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.424 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.424 19:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.424 19:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.424 19:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.424 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.424 "name": "raid_bdev1", 00:18:05.424 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:05.424 "strip_size_kb": 64, 00:18:05.424 "state": "online", 00:18:05.424 "raid_level": "raid5f", 00:18:05.424 "superblock": false, 00:18:05.424 "num_base_bdevs": 4, 00:18:05.424 "num_base_bdevs_discovered": 3, 00:18:05.424 "num_base_bdevs_operational": 3, 00:18:05.424 "base_bdevs_list": [ 00:18:05.424 { 00:18:05.424 "name": null, 00:18:05.424 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:05.424 "is_configured": false, 00:18:05.424 "data_offset": 0, 00:18:05.424 "data_size": 65536 00:18:05.424 }, 00:18:05.424 { 00:18:05.424 "name": "BaseBdev2", 00:18:05.424 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:05.424 "is_configured": true, 00:18:05.424 "data_offset": 0, 00:18:05.424 "data_size": 65536 00:18:05.424 }, 00:18:05.424 { 00:18:05.424 "name": "BaseBdev3", 00:18:05.424 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:05.424 "is_configured": true, 00:18:05.424 "data_offset": 0, 00:18:05.424 "data_size": 65536 00:18:05.424 }, 00:18:05.424 { 00:18:05.424 "name": "BaseBdev4", 00:18:05.424 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:05.424 "is_configured": true, 00:18:05.424 "data_offset": 0, 00:18:05.424 "data_size": 65536 00:18:05.424 } 00:18:05.424 ] 00:18:05.424 }' 00:18:05.424 19:07:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.424 19:07:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.992 19:07:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.992 "name": "raid_bdev1", 00:18:05.992 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:05.992 "strip_size_kb": 64, 00:18:05.992 "state": "online", 00:18:05.992 "raid_level": "raid5f", 00:18:05.992 "superblock": false, 00:18:05.992 "num_base_bdevs": 4, 00:18:05.992 "num_base_bdevs_discovered": 3, 00:18:05.992 "num_base_bdevs_operational": 3, 00:18:05.992 "base_bdevs_list": [ 00:18:05.992 { 00:18:05.992 "name": null, 00:18:05.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.992 "is_configured": false, 00:18:05.992 "data_offset": 0, 00:18:05.992 "data_size": 65536 00:18:05.992 }, 00:18:05.992 { 00:18:05.992 "name": "BaseBdev2", 00:18:05.992 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:05.992 "is_configured": true, 00:18:05.992 "data_offset": 0, 00:18:05.992 "data_size": 65536 00:18:05.992 }, 00:18:05.992 { 00:18:05.992 "name": "BaseBdev3", 00:18:05.992 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:05.992 "is_configured": true, 00:18:05.992 "data_offset": 0, 00:18:05.992 "data_size": 65536 00:18:05.992 }, 00:18:05.992 { 00:18:05.992 "name": "BaseBdev4", 00:18:05.992 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:05.992 "is_configured": true, 00:18:05.992 "data_offset": 0, 00:18:05.992 "data_size": 65536 00:18:05.992 } 00:18:05.992 ] 00:18:05.992 }' 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.992 [2024-11-26 19:07:32.585352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.992 [2024-11-26 19:07:32.600166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.992 19:07:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:05.992 [2024-11-26 19:07:32.609489] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.370 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.370 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.370 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.370 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.370 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.370 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.371 19:07:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.371 "name": "raid_bdev1", 00:18:07.371 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:07.371 "strip_size_kb": 64, 00:18:07.371 "state": "online", 00:18:07.371 "raid_level": "raid5f", 00:18:07.371 "superblock": false, 00:18:07.371 "num_base_bdevs": 4, 00:18:07.371 "num_base_bdevs_discovered": 4, 00:18:07.371 "num_base_bdevs_operational": 4, 00:18:07.371 "process": { 00:18:07.371 "type": "rebuild", 00:18:07.371 "target": "spare", 00:18:07.371 "progress": { 00:18:07.371 "blocks": 17280, 00:18:07.371 "percent": 8 00:18:07.371 } 00:18:07.371 }, 00:18:07.371 "base_bdevs_list": [ 00:18:07.371 { 00:18:07.371 "name": "spare", 00:18:07.371 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:07.371 "is_configured": true, 00:18:07.371 "data_offset": 0, 00:18:07.371 "data_size": 65536 00:18:07.371 }, 00:18:07.371 { 00:18:07.371 "name": "BaseBdev2", 00:18:07.371 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:07.371 "is_configured": true, 00:18:07.371 "data_offset": 0, 00:18:07.371 "data_size": 65536 00:18:07.371 }, 00:18:07.371 { 00:18:07.371 "name": "BaseBdev3", 00:18:07.371 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:07.371 "is_configured": true, 00:18:07.371 "data_offset": 0, 00:18:07.371 "data_size": 65536 00:18:07.371 }, 00:18:07.371 { 00:18:07.371 "name": "BaseBdev4", 00:18:07.371 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:07.371 "is_configured": true, 00:18:07.371 "data_offset": 0, 00:18:07.371 "data_size": 65536 00:18:07.371 } 00:18:07.371 ] 00:18:07.371 }' 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=691 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.371 "name": "raid_bdev1", 00:18:07.371 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 
00:18:07.371 "strip_size_kb": 64, 00:18:07.371 "state": "online", 00:18:07.371 "raid_level": "raid5f", 00:18:07.371 "superblock": false, 00:18:07.371 "num_base_bdevs": 4, 00:18:07.371 "num_base_bdevs_discovered": 4, 00:18:07.371 "num_base_bdevs_operational": 4, 00:18:07.371 "process": { 00:18:07.371 "type": "rebuild", 00:18:07.371 "target": "spare", 00:18:07.371 "progress": { 00:18:07.371 "blocks": 21120, 00:18:07.371 "percent": 10 00:18:07.371 } 00:18:07.371 }, 00:18:07.371 "base_bdevs_list": [ 00:18:07.371 { 00:18:07.371 "name": "spare", 00:18:07.371 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:07.371 "is_configured": true, 00:18:07.371 "data_offset": 0, 00:18:07.371 "data_size": 65536 00:18:07.371 }, 00:18:07.371 { 00:18:07.371 "name": "BaseBdev2", 00:18:07.371 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:07.371 "is_configured": true, 00:18:07.371 "data_offset": 0, 00:18:07.371 "data_size": 65536 00:18:07.371 }, 00:18:07.371 { 00:18:07.371 "name": "BaseBdev3", 00:18:07.371 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:07.371 "is_configured": true, 00:18:07.371 "data_offset": 0, 00:18:07.371 "data_size": 65536 00:18:07.371 }, 00:18:07.371 { 00:18:07.371 "name": "BaseBdev4", 00:18:07.371 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:07.371 "is_configured": true, 00:18:07.371 "data_offset": 0, 00:18:07.371 "data_size": 65536 00:18:07.371 } 00:18:07.371 ] 00:18:07.371 }' 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.371 19:07:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:08.749 19:07:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.749 19:07:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.749 19:07:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.749 19:07:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.749 19:07:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.749 19:07:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.749 19:07:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.749 19:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.749 19:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.749 19:07:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.749 19:07:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.749 19:07:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.749 "name": "raid_bdev1", 00:18:08.749 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:08.749 "strip_size_kb": 64, 00:18:08.749 "state": "online", 00:18:08.749 "raid_level": "raid5f", 00:18:08.749 "superblock": false, 00:18:08.749 "num_base_bdevs": 4, 00:18:08.749 "num_base_bdevs_discovered": 4, 00:18:08.749 "num_base_bdevs_operational": 4, 00:18:08.749 "process": { 00:18:08.749 "type": "rebuild", 00:18:08.749 "target": "spare", 00:18:08.749 "progress": { 00:18:08.749 "blocks": 44160, 00:18:08.749 "percent": 22 00:18:08.749 } 00:18:08.749 }, 00:18:08.749 "base_bdevs_list": [ 00:18:08.749 { 00:18:08.749 "name": "spare", 00:18:08.749 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 
00:18:08.749 "is_configured": true, 00:18:08.749 "data_offset": 0, 00:18:08.749 "data_size": 65536 00:18:08.749 }, 00:18:08.749 { 00:18:08.749 "name": "BaseBdev2", 00:18:08.749 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:08.749 "is_configured": true, 00:18:08.749 "data_offset": 0, 00:18:08.749 "data_size": 65536 00:18:08.749 }, 00:18:08.749 { 00:18:08.749 "name": "BaseBdev3", 00:18:08.749 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:08.749 "is_configured": true, 00:18:08.749 "data_offset": 0, 00:18:08.749 "data_size": 65536 00:18:08.749 }, 00:18:08.749 { 00:18:08.749 "name": "BaseBdev4", 00:18:08.749 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:08.749 "is_configured": true, 00:18:08.749 "data_offset": 0, 00:18:08.749 "data_size": 65536 00:18:08.749 } 00:18:08.749 ] 00:18:08.749 }' 00:18:08.749 19:07:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.749 19:07:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.749 19:07:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.749 19:07:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.749 19:07:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.687 "name": "raid_bdev1", 00:18:09.687 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:09.687 "strip_size_kb": 64, 00:18:09.687 "state": "online", 00:18:09.687 "raid_level": "raid5f", 00:18:09.687 "superblock": false, 00:18:09.687 "num_base_bdevs": 4, 00:18:09.687 "num_base_bdevs_discovered": 4, 00:18:09.687 "num_base_bdevs_operational": 4, 00:18:09.687 "process": { 00:18:09.687 "type": "rebuild", 00:18:09.687 "target": "spare", 00:18:09.687 "progress": { 00:18:09.687 "blocks": 65280, 00:18:09.687 "percent": 33 00:18:09.687 } 00:18:09.687 }, 00:18:09.687 "base_bdevs_list": [ 00:18:09.687 { 00:18:09.687 "name": "spare", 00:18:09.687 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:09.687 "is_configured": true, 00:18:09.687 "data_offset": 0, 00:18:09.687 "data_size": 65536 00:18:09.687 }, 00:18:09.687 { 00:18:09.687 "name": "BaseBdev2", 00:18:09.687 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:09.687 "is_configured": true, 00:18:09.687 "data_offset": 0, 00:18:09.687 "data_size": 65536 00:18:09.687 }, 00:18:09.687 { 00:18:09.687 "name": "BaseBdev3", 00:18:09.687 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:09.687 "is_configured": true, 00:18:09.687 "data_offset": 0, 00:18:09.687 "data_size": 65536 00:18:09.687 }, 00:18:09.687 { 00:18:09.687 "name": 
"BaseBdev4", 00:18:09.687 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:09.687 "is_configured": true, 00:18:09.687 "data_offset": 0, 00:18:09.687 "data_size": 65536 00:18:09.687 } 00:18:09.687 ] 00:18:09.687 }' 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.687 19:07:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.066 19:07:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.066 "name": "raid_bdev1", 00:18:11.066 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:11.066 "strip_size_kb": 64, 00:18:11.066 "state": "online", 00:18:11.066 "raid_level": "raid5f", 00:18:11.066 "superblock": false, 00:18:11.066 "num_base_bdevs": 4, 00:18:11.066 "num_base_bdevs_discovered": 4, 00:18:11.066 "num_base_bdevs_operational": 4, 00:18:11.066 "process": { 00:18:11.066 "type": "rebuild", 00:18:11.066 "target": "spare", 00:18:11.066 "progress": { 00:18:11.066 "blocks": 88320, 00:18:11.066 "percent": 44 00:18:11.066 } 00:18:11.066 }, 00:18:11.066 "base_bdevs_list": [ 00:18:11.066 { 00:18:11.066 "name": "spare", 00:18:11.066 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:11.066 "is_configured": true, 00:18:11.066 "data_offset": 0, 00:18:11.066 "data_size": 65536 00:18:11.066 }, 00:18:11.066 { 00:18:11.066 "name": "BaseBdev2", 00:18:11.066 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:11.066 "is_configured": true, 00:18:11.066 "data_offset": 0, 00:18:11.066 "data_size": 65536 00:18:11.066 }, 00:18:11.066 { 00:18:11.066 "name": "BaseBdev3", 00:18:11.066 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:11.066 "is_configured": true, 00:18:11.066 "data_offset": 0, 00:18:11.066 "data_size": 65536 00:18:11.066 }, 00:18:11.066 { 00:18:11.066 "name": "BaseBdev4", 00:18:11.066 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:11.066 "is_configured": true, 00:18:11.066 "data_offset": 0, 00:18:11.066 "data_size": 65536 00:18:11.066 } 00:18:11.066 ] 00:18:11.066 }' 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.066 19:07:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.004 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.004 "name": "raid_bdev1", 00:18:12.004 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:12.004 "strip_size_kb": 64, 00:18:12.004 "state": "online", 00:18:12.004 "raid_level": "raid5f", 00:18:12.004 "superblock": false, 00:18:12.004 "num_base_bdevs": 4, 00:18:12.004 "num_base_bdevs_discovered": 4, 00:18:12.004 "num_base_bdevs_operational": 4, 00:18:12.005 "process": { 00:18:12.005 "type": "rebuild", 00:18:12.005 "target": "spare", 00:18:12.005 "progress": { 00:18:12.005 "blocks": 109440, 00:18:12.005 "percent": 55 00:18:12.005 } 
00:18:12.005 }, 00:18:12.005 "base_bdevs_list": [ 00:18:12.005 { 00:18:12.005 "name": "spare", 00:18:12.005 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:12.005 "is_configured": true, 00:18:12.005 "data_offset": 0, 00:18:12.005 "data_size": 65536 00:18:12.005 }, 00:18:12.005 { 00:18:12.005 "name": "BaseBdev2", 00:18:12.005 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:12.005 "is_configured": true, 00:18:12.005 "data_offset": 0, 00:18:12.005 "data_size": 65536 00:18:12.005 }, 00:18:12.005 { 00:18:12.005 "name": "BaseBdev3", 00:18:12.005 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:12.005 "is_configured": true, 00:18:12.005 "data_offset": 0, 00:18:12.005 "data_size": 65536 00:18:12.005 }, 00:18:12.005 { 00:18:12.005 "name": "BaseBdev4", 00:18:12.005 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:12.005 "is_configured": true, 00:18:12.005 "data_offset": 0, 00:18:12.005 "data_size": 65536 00:18:12.005 } 00:18:12.005 ] 00:18:12.005 }' 00:18:12.005 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.005 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.005 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.265 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.265 19:07:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.201 
19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.201 "name": "raid_bdev1", 00:18:13.201 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:13.201 "strip_size_kb": 64, 00:18:13.201 "state": "online", 00:18:13.201 "raid_level": "raid5f", 00:18:13.201 "superblock": false, 00:18:13.201 "num_base_bdevs": 4, 00:18:13.201 "num_base_bdevs_discovered": 4, 00:18:13.201 "num_base_bdevs_operational": 4, 00:18:13.201 "process": { 00:18:13.201 "type": "rebuild", 00:18:13.201 "target": "spare", 00:18:13.201 "progress": { 00:18:13.201 "blocks": 132480, 00:18:13.201 "percent": 67 00:18:13.201 } 00:18:13.201 }, 00:18:13.201 "base_bdevs_list": [ 00:18:13.201 { 00:18:13.201 "name": "spare", 00:18:13.201 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:13.201 "is_configured": true, 00:18:13.201 "data_offset": 0, 00:18:13.201 "data_size": 65536 00:18:13.201 }, 00:18:13.201 { 00:18:13.201 "name": "BaseBdev2", 00:18:13.201 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:13.201 "is_configured": true, 00:18:13.201 "data_offset": 0, 00:18:13.201 "data_size": 65536 00:18:13.201 }, 00:18:13.201 { 00:18:13.201 "name": "BaseBdev3", 00:18:13.201 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 
00:18:13.201 "is_configured": true, 00:18:13.201 "data_offset": 0, 00:18:13.201 "data_size": 65536 00:18:13.201 }, 00:18:13.201 { 00:18:13.201 "name": "BaseBdev4", 00:18:13.201 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:13.201 "is_configured": true, 00:18:13.201 "data_offset": 0, 00:18:13.201 "data_size": 65536 00:18:13.201 } 00:18:13.201 ] 00:18:13.201 }' 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.201 19:07:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.578 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.578 "name": "raid_bdev1", 00:18:14.578 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:14.578 "strip_size_kb": 64, 00:18:14.578 "state": "online", 00:18:14.578 "raid_level": "raid5f", 00:18:14.578 "superblock": false, 00:18:14.578 "num_base_bdevs": 4, 00:18:14.578 "num_base_bdevs_discovered": 4, 00:18:14.578 "num_base_bdevs_operational": 4, 00:18:14.578 "process": { 00:18:14.578 "type": "rebuild", 00:18:14.578 "target": "spare", 00:18:14.578 "progress": { 00:18:14.578 "blocks": 153600, 00:18:14.578 "percent": 78 00:18:14.578 } 00:18:14.578 }, 00:18:14.578 "base_bdevs_list": [ 00:18:14.578 { 00:18:14.579 "name": "spare", 00:18:14.579 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:14.579 "is_configured": true, 00:18:14.579 "data_offset": 0, 00:18:14.579 "data_size": 65536 00:18:14.579 }, 00:18:14.579 { 00:18:14.579 "name": "BaseBdev2", 00:18:14.579 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:14.579 "is_configured": true, 00:18:14.579 "data_offset": 0, 00:18:14.579 "data_size": 65536 00:18:14.579 }, 00:18:14.579 { 00:18:14.579 "name": "BaseBdev3", 00:18:14.579 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:14.579 "is_configured": true, 00:18:14.579 "data_offset": 0, 00:18:14.579 "data_size": 65536 00:18:14.579 }, 00:18:14.579 { 00:18:14.579 "name": "BaseBdev4", 00:18:14.579 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:14.579 "is_configured": true, 00:18:14.579 "data_offset": 0, 00:18:14.579 "data_size": 65536 00:18:14.579 } 00:18:14.579 ] 00:18:14.579 }' 00:18:14.579 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.579 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.579 19:07:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.579 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.579 19:07:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:15.525 19:07:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.525 19:07:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.525 19:07:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.525 19:07:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.525 19:07:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.525 19:07:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.525 19:07:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.525 19:07:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.525 19:07:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.525 19:07:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.525 19:07:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.525 19:07:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.525 "name": "raid_bdev1", 00:18:15.525 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:15.525 "strip_size_kb": 64, 00:18:15.525 "state": "online", 00:18:15.525 "raid_level": "raid5f", 00:18:15.525 "superblock": false, 00:18:15.525 "num_base_bdevs": 4, 00:18:15.525 "num_base_bdevs_discovered": 4, 00:18:15.525 "num_base_bdevs_operational": 4, 00:18:15.525 "process": { 00:18:15.525 
"type": "rebuild", 00:18:15.525 "target": "spare", 00:18:15.525 "progress": { 00:18:15.525 "blocks": 176640, 00:18:15.525 "percent": 89 00:18:15.525 } 00:18:15.525 }, 00:18:15.525 "base_bdevs_list": [ 00:18:15.525 { 00:18:15.525 "name": "spare", 00:18:15.525 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:15.525 "is_configured": true, 00:18:15.525 "data_offset": 0, 00:18:15.525 "data_size": 65536 00:18:15.525 }, 00:18:15.525 { 00:18:15.525 "name": "BaseBdev2", 00:18:15.525 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:15.525 "is_configured": true, 00:18:15.525 "data_offset": 0, 00:18:15.525 "data_size": 65536 00:18:15.525 }, 00:18:15.525 { 00:18:15.525 "name": "BaseBdev3", 00:18:15.525 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:15.525 "is_configured": true, 00:18:15.525 "data_offset": 0, 00:18:15.525 "data_size": 65536 00:18:15.525 }, 00:18:15.525 { 00:18:15.525 "name": "BaseBdev4", 00:18:15.525 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:15.525 "is_configured": true, 00:18:15.525 "data_offset": 0, 00:18:15.525 "data_size": 65536 00:18:15.525 } 00:18:15.525 ] 00:18:15.525 }' 00:18:15.525 19:07:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.525 19:07:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.525 19:07:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.525 19:07:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.526 19:07:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.461 [2024-11-26 19:07:43.050605] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:16.461 [2024-11-26 19:07:43.050990] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:16.461 [2024-11-26 19:07:43.051079] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.720 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.720 "name": "raid_bdev1", 00:18:16.720 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:16.720 "strip_size_kb": 64, 00:18:16.720 "state": "online", 00:18:16.720 "raid_level": "raid5f", 00:18:16.720 "superblock": false, 00:18:16.720 "num_base_bdevs": 4, 00:18:16.720 "num_base_bdevs_discovered": 4, 00:18:16.720 "num_base_bdevs_operational": 4, 00:18:16.720 "base_bdevs_list": [ 00:18:16.720 { 00:18:16.720 "name": "spare", 00:18:16.720 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:16.720 "is_configured": true, 00:18:16.720 "data_offset": 0, 00:18:16.720 "data_size": 65536 00:18:16.720 }, 00:18:16.720 { 
00:18:16.720 "name": "BaseBdev2", 00:18:16.720 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:16.720 "is_configured": true, 00:18:16.720 "data_offset": 0, 00:18:16.720 "data_size": 65536 00:18:16.720 }, 00:18:16.720 { 00:18:16.720 "name": "BaseBdev3", 00:18:16.720 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:16.720 "is_configured": true, 00:18:16.720 "data_offset": 0, 00:18:16.721 "data_size": 65536 00:18:16.721 }, 00:18:16.721 { 00:18:16.721 "name": "BaseBdev4", 00:18:16.721 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:16.721 "is_configured": true, 00:18:16.721 "data_offset": 0, 00:18:16.721 "data_size": 65536 00:18:16.721 } 00:18:16.721 ] 00:18:16.721 }' 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.721 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.980 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.980 "name": "raid_bdev1", 00:18:16.980 "uuid": "dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:16.980 "strip_size_kb": 64, 00:18:16.980 "state": "online", 00:18:16.980 "raid_level": "raid5f", 00:18:16.980 "superblock": false, 00:18:16.980 "num_base_bdevs": 4, 00:18:16.980 "num_base_bdevs_discovered": 4, 00:18:16.980 "num_base_bdevs_operational": 4, 00:18:16.980 "base_bdevs_list": [ 00:18:16.980 { 00:18:16.980 "name": "spare", 00:18:16.980 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:16.980 "is_configured": true, 00:18:16.980 "data_offset": 0, 00:18:16.980 "data_size": 65536 00:18:16.980 }, 00:18:16.980 { 00:18:16.980 "name": "BaseBdev2", 00:18:16.980 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:16.980 "is_configured": true, 00:18:16.980 "data_offset": 0, 00:18:16.980 "data_size": 65536 00:18:16.980 }, 00:18:16.980 { 00:18:16.980 "name": "BaseBdev3", 00:18:16.980 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:16.980 "is_configured": true, 00:18:16.980 "data_offset": 0, 00:18:16.980 "data_size": 65536 00:18:16.980 }, 00:18:16.980 { 00:18:16.980 "name": "BaseBdev4", 00:18:16.980 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:16.980 "is_configured": true, 00:18:16.980 "data_offset": 0, 00:18:16.980 "data_size": 65536 00:18:16.980 } 00:18:16.980 ] 00:18:16.980 }' 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.981 19:07:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.981 "name": "raid_bdev1", 00:18:16.981 "uuid": 
"dbaf363d-8437-4801-ad6f-e2f5e6cd2d08", 00:18:16.981 "strip_size_kb": 64, 00:18:16.981 "state": "online", 00:18:16.981 "raid_level": "raid5f", 00:18:16.981 "superblock": false, 00:18:16.981 "num_base_bdevs": 4, 00:18:16.981 "num_base_bdevs_discovered": 4, 00:18:16.981 "num_base_bdevs_operational": 4, 00:18:16.981 "base_bdevs_list": [ 00:18:16.981 { 00:18:16.981 "name": "spare", 00:18:16.981 "uuid": "c20c68d7-9748-58e3-beec-89c68308abeb", 00:18:16.981 "is_configured": true, 00:18:16.981 "data_offset": 0, 00:18:16.981 "data_size": 65536 00:18:16.981 }, 00:18:16.981 { 00:18:16.981 "name": "BaseBdev2", 00:18:16.981 "uuid": "31f6c354-d0db-54d6-a645-8d8328248f59", 00:18:16.981 "is_configured": true, 00:18:16.981 "data_offset": 0, 00:18:16.981 "data_size": 65536 00:18:16.981 }, 00:18:16.981 { 00:18:16.981 "name": "BaseBdev3", 00:18:16.981 "uuid": "e9b63921-9afb-55c6-a324-7827ccac9d56", 00:18:16.981 "is_configured": true, 00:18:16.981 "data_offset": 0, 00:18:16.981 "data_size": 65536 00:18:16.981 }, 00:18:16.981 { 00:18:16.981 "name": "BaseBdev4", 00:18:16.981 "uuid": "e2a1d893-f8e3-5f5a-9c6c-1c18921d6f35", 00:18:16.981 "is_configured": true, 00:18:16.981 "data_offset": 0, 00:18:16.981 "data_size": 65536 00:18:16.981 } 00:18:16.981 ] 00:18:16.981 }' 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.981 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.550 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:17.550 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.550 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.550 [2024-11-26 19:07:43.970268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.550 [2024-11-26 19:07:43.970467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:18:17.550 [2024-11-26 19:07:43.970773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.550 [2024-11-26 19:07:43.971039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.550 [2024-11-26 19:07:43.971193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:17.550 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.550 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.550 19:07:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:17.550 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.550 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.550 19:07:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:17.550 19:07:44 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:17.550 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:17.808 /dev/nbd0 00:18:17.808 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:17.808 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:17.809 1+0 records in 00:18:17.809 1+0 records out 00:18:17.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608769 s, 6.7 MB/s 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:17.809 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:18.375 /dev/nbd1 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.375 1+0 records in 00:18:18.375 1+0 records out 00:18:18.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598197 s, 6.8 MB/s 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:18.375 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.376 19:07:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:18:18.944 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:18.944 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:18.944 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:18.944 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.944 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.944 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:18.944 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:18.944 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.944 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.944 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:19.203 19:07:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85528 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85528 ']' 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85528 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85528 00:18:19.203 killing process with pid 85528 00:18:19.203 Received shutdown signal, test time was about 60.000000 seconds 00:18:19.203 00:18:19.203 Latency(us) 00:18:19.203 [2024-11-26T19:07:45.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.203 [2024-11-26T19:07:45.826Z] =================================================================================================================== 00:18:19.203 [2024-11-26T19:07:45.826Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85528' 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85528 00:18:19.203 [2024-11-26 19:07:45.634203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:19.203 19:07:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85528 00:18:19.775 [2024-11-26 19:07:46.151380] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:21.151 ************************************ 00:18:21.151 END TEST 
raid5f_rebuild_test 00:18:21.151 ************************************ 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:21.151 00:18:21.151 real 0m20.994s 00:18:21.151 user 0m26.151s 00:18:21.151 sys 0m2.561s 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.151 19:07:47 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:21.151 19:07:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:21.151 19:07:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:21.151 19:07:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:21.151 ************************************ 00:18:21.151 START TEST raid5f_rebuild_test_sb 00:18:21.151 ************************************ 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:21.151 19:07:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86043 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86043 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86043 ']' 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.151 19:07:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.151 [2024-11-26 19:07:47.600061] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:18:21.151 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:21.151 Zero copy mechanism will not be used. 00:18:21.151 [2024-11-26 19:07:47.600535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86043 ] 00:18:21.411 [2024-11-26 19:07:47.784078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.411 [2024-11-26 19:07:47.955157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.669 [2024-11-26 19:07:48.207613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:21.669 [2024-11-26 19:07:48.207708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.237 BaseBdev1_malloc 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.237 [2024-11-26 
19:07:48.673237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:22.237 [2024-11-26 19:07:48.673377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.237 [2024-11-26 19:07:48.673412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:22.237 [2024-11-26 19:07:48.673432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.237 [2024-11-26 19:07:48.676651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.237 [2024-11-26 19:07:48.676727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:22.237 BaseBdev1 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.237 BaseBdev2_malloc 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.237 [2024-11-26 19:07:48.731862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:22.237 [2024-11-26 19:07:48.731976] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.237 [2024-11-26 19:07:48.732010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:22.237 [2024-11-26 19:07:48.732030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.237 [2024-11-26 19:07:48.735252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.237 [2024-11-26 19:07:48.735355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:22.237 BaseBdev2 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.237 BaseBdev3_malloc 00:18:22.237 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.238 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:22.238 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.238 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.238 [2024-11-26 19:07:48.806731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:22.238 [2024-11-26 19:07:48.806908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.238 [2024-11-26 19:07:48.806969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:18:22.238 [2024-11-26 19:07:48.807006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.238 [2024-11-26 19:07:48.811123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.238 [2024-11-26 19:07:48.811192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:22.238 BaseBdev3 00:18:22.238 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.238 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:22.238 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:22.238 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.238 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.497 BaseBdev4_malloc 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.497 [2024-11-26 19:07:48.865830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:22.497 [2024-11-26 19:07:48.865995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.497 [2024-11-26 19:07:48.866034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:22.497 [2024-11-26 19:07:48.866054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.497 [2024-11-26 19:07:48.869664] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.497 [2024-11-26 19:07:48.869717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:22.497 BaseBdev4 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.497 spare_malloc 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.497 spare_delay 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.497 [2024-11-26 19:07:48.942110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:22.497 [2024-11-26 19:07:48.942188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.497 [2024-11-26 19:07:48.942222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:18:22.497 [2024-11-26 19:07:48.942242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.497 [2024-11-26 19:07:48.945514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.497 [2024-11-26 19:07:48.945560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:22.497 spare 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.497 [2024-11-26 19:07:48.954369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:22.497 [2024-11-26 19:07:48.957216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.497 [2024-11-26 19:07:48.957333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:22.497 [2024-11-26 19:07:48.957420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:22.497 [2024-11-26 19:07:48.957739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:22.497 [2024-11-26 19:07:48.957763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:22.497 [2024-11-26 19:07:48.958153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:22.497 [2024-11-26 19:07:48.965523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:22.497 [2024-11-26 19:07:48.965560] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:22.497 [2024-11-26 19:07:48.965893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.497 19:07:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.497 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.497 "name": "raid_bdev1", 00:18:22.497 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:22.497 "strip_size_kb": 64, 00:18:22.497 "state": "online", 00:18:22.497 "raid_level": "raid5f", 00:18:22.497 "superblock": true, 00:18:22.497 "num_base_bdevs": 4, 00:18:22.497 "num_base_bdevs_discovered": 4, 00:18:22.497 "num_base_bdevs_operational": 4, 00:18:22.497 "base_bdevs_list": [ 00:18:22.497 { 00:18:22.497 "name": "BaseBdev1", 00:18:22.497 "uuid": "576a4acf-9ea4-524a-85a9-6cf994af5052", 00:18:22.497 "is_configured": true, 00:18:22.497 "data_offset": 2048, 00:18:22.497 "data_size": 63488 00:18:22.497 }, 00:18:22.497 { 00:18:22.497 "name": "BaseBdev2", 00:18:22.497 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:22.497 "is_configured": true, 00:18:22.497 "data_offset": 2048, 00:18:22.497 "data_size": 63488 00:18:22.497 }, 00:18:22.497 { 00:18:22.497 "name": "BaseBdev3", 00:18:22.497 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:22.497 "is_configured": true, 00:18:22.497 "data_offset": 2048, 00:18:22.497 "data_size": 63488 00:18:22.497 }, 00:18:22.497 { 00:18:22.497 "name": "BaseBdev4", 00:18:22.497 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:22.497 "is_configured": true, 00:18:22.497 "data_offset": 2048, 00:18:22.497 "data_size": 63488 00:18:22.497 } 00:18:22.497 ] 00:18:22.498 }' 00:18:22.498 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.498 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:23.068 [2024-11-26 19:07:49.506803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:23.068 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:23.327 [2024-11-26 19:07:49.926706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:23.586 /dev/nbd0 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:18:23.586 1+0 records in 00:18:23.586 1+0 records out 00:18:23.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402409 s, 10.2 MB/s 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:23.586 19:07:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:24.154 496+0 records in 00:18:24.154 496+0 records out 00:18:24.154 97517568 bytes (98 MB, 93 MiB) copied, 0.693103 s, 141 MB/s 00:18:24.154 19:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:24.154 19:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.154 19:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:24.154 19:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:18:24.154 19:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:24.154 19:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.154 19:07:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:24.420 [2024-11-26 19:07:51.031109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.420 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.691 [2024-11-26 19:07:51.049489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 
3 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.691 "name": "raid_bdev1", 00:18:24.691 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:24.691 "strip_size_kb": 64, 00:18:24.691 "state": "online", 00:18:24.691 "raid_level": "raid5f", 00:18:24.691 "superblock": true, 00:18:24.691 "num_base_bdevs": 4, 00:18:24.691 "num_base_bdevs_discovered": 3, 00:18:24.691 
"num_base_bdevs_operational": 3, 00:18:24.691 "base_bdevs_list": [ 00:18:24.691 { 00:18:24.691 "name": null, 00:18:24.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.691 "is_configured": false, 00:18:24.691 "data_offset": 0, 00:18:24.691 "data_size": 63488 00:18:24.691 }, 00:18:24.691 { 00:18:24.691 "name": "BaseBdev2", 00:18:24.691 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:24.691 "is_configured": true, 00:18:24.691 "data_offset": 2048, 00:18:24.691 "data_size": 63488 00:18:24.691 }, 00:18:24.691 { 00:18:24.691 "name": "BaseBdev3", 00:18:24.691 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:24.691 "is_configured": true, 00:18:24.691 "data_offset": 2048, 00:18:24.691 "data_size": 63488 00:18:24.691 }, 00:18:24.691 { 00:18:24.691 "name": "BaseBdev4", 00:18:24.691 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:24.691 "is_configured": true, 00:18:24.691 "data_offset": 2048, 00:18:24.691 "data_size": 63488 00:18:24.691 } 00:18:24.691 ] 00:18:24.691 }' 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.691 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.949 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:24.950 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.950 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.950 [2024-11-26 19:07:51.565692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.208 [2024-11-26 19:07:51.582630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:25.208 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.208 19:07:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:25.208 
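The test above re-adds the `spare` bdev and then waits before verifying that a rebuild process has started; elsewhere the harness bounds such waits with a retry loop (`waitfornbd_exit` polls `/proc/partitions` up to 20 times, and the rebuild check later uses a `SECONDS < timeout` guard). A minimal sketch of that bounded-retry pattern — not the SPDK helper itself; `wait_for_condition` and the marker file are illustrative stand-ins:

```shell
#!/usr/bin/env bash
# Sketch of the bounded-retry pattern seen in the log: poll a condition
# up to 20 times, succeed as soon as it holds, fail after the last try.
wait_for_condition() {
    local i
    for ((i = 1; i <= 20; i++)); do
        if "$@"; then
            return 0          # condition held -> done (mirrors 'break'/'return 0')
        fi
        sleep 0.1
    done
    return 1                  # timed out
}

# Hypothetical usage: wait for a marker file instead of an nbd entry in
# /proc/partitions, so the sketch is self-contained and runnable anywhere.
marker=$(mktemp)
wait_for_condition test -e "$marker" && echo "condition met"
rm -f "$marker"
```

The same shape covers both uses in the log: a fixed iteration count for fast device-teardown checks, and a wall-clock `SECONDS` bound for the long-running rebuild wait.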
[2024-11-26 19:07:51.596541] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.143 "name": "raid_bdev1", 00:18:26.143 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:26.143 "strip_size_kb": 64, 00:18:26.143 "state": "online", 00:18:26.143 "raid_level": "raid5f", 00:18:26.143 "superblock": true, 00:18:26.143 "num_base_bdevs": 4, 00:18:26.143 "num_base_bdevs_discovered": 4, 00:18:26.143 "num_base_bdevs_operational": 4, 00:18:26.143 "process": { 00:18:26.143 "type": "rebuild", 00:18:26.143 "target": "spare", 00:18:26.143 "progress": { 00:18:26.143 "blocks": 17280, 00:18:26.143 "percent": 9 00:18:26.143 } 00:18:26.143 }, 00:18:26.143 "base_bdevs_list": [ 00:18:26.143 { 00:18:26.143 "name": 
"spare", 00:18:26.143 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:26.143 "is_configured": true, 00:18:26.143 "data_offset": 2048, 00:18:26.143 "data_size": 63488 00:18:26.143 }, 00:18:26.143 { 00:18:26.143 "name": "BaseBdev2", 00:18:26.143 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:26.143 "is_configured": true, 00:18:26.143 "data_offset": 2048, 00:18:26.143 "data_size": 63488 00:18:26.143 }, 00:18:26.143 { 00:18:26.143 "name": "BaseBdev3", 00:18:26.143 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:26.143 "is_configured": true, 00:18:26.143 "data_offset": 2048, 00:18:26.143 "data_size": 63488 00:18:26.143 }, 00:18:26.143 { 00:18:26.143 "name": "BaseBdev4", 00:18:26.143 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:26.143 "is_configured": true, 00:18:26.143 "data_offset": 2048, 00:18:26.143 "data_size": 63488 00:18:26.143 } 00:18:26.143 ] 00:18:26.143 }' 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.143 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.143 [2024-11-26 19:07:52.758258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.402 [2024-11-26 19:07:52.814263] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:26.402 [2024-11-26 
19:07:52.814660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.402 [2024-11-26 19:07:52.814822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.402 [2024-11-26 19:07:52.814884] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.402 "name": "raid_bdev1", 00:18:26.402 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:26.402 "strip_size_kb": 64, 00:18:26.402 "state": "online", 00:18:26.402 "raid_level": "raid5f", 00:18:26.402 "superblock": true, 00:18:26.402 "num_base_bdevs": 4, 00:18:26.402 "num_base_bdevs_discovered": 3, 00:18:26.402 "num_base_bdevs_operational": 3, 00:18:26.402 "base_bdevs_list": [ 00:18:26.402 { 00:18:26.402 "name": null, 00:18:26.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.402 "is_configured": false, 00:18:26.402 "data_offset": 0, 00:18:26.402 "data_size": 63488 00:18:26.402 }, 00:18:26.402 { 00:18:26.402 "name": "BaseBdev2", 00:18:26.402 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:26.402 "is_configured": true, 00:18:26.402 "data_offset": 2048, 00:18:26.402 "data_size": 63488 00:18:26.402 }, 00:18:26.402 { 00:18:26.402 "name": "BaseBdev3", 00:18:26.402 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:26.402 "is_configured": true, 00:18:26.402 "data_offset": 2048, 00:18:26.402 "data_size": 63488 00:18:26.402 }, 00:18:26.402 { 00:18:26.402 "name": "BaseBdev4", 00:18:26.402 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:26.402 "is_configured": true, 00:18:26.402 "data_offset": 2048, 00:18:26.402 "data_size": 63488 00:18:26.402 } 00:18:26.402 ] 00:18:26.402 }' 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.402 19:07:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.969 "name": "raid_bdev1", 00:18:26.969 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:26.969 "strip_size_kb": 64, 00:18:26.969 "state": "online", 00:18:26.969 "raid_level": "raid5f", 00:18:26.969 "superblock": true, 00:18:26.969 "num_base_bdevs": 4, 00:18:26.969 "num_base_bdevs_discovered": 3, 00:18:26.969 "num_base_bdevs_operational": 3, 00:18:26.969 "base_bdevs_list": [ 00:18:26.969 { 00:18:26.969 "name": null, 00:18:26.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.969 "is_configured": false, 00:18:26.969 "data_offset": 0, 00:18:26.969 "data_size": 63488 00:18:26.969 }, 00:18:26.969 { 00:18:26.969 "name": "BaseBdev2", 00:18:26.969 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:26.969 "is_configured": true, 00:18:26.969 "data_offset": 2048, 00:18:26.969 "data_size": 63488 00:18:26.969 }, 00:18:26.969 { 00:18:26.969 "name": "BaseBdev3", 00:18:26.969 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:26.969 "is_configured": true, 
00:18:26.969 "data_offset": 2048, 00:18:26.969 "data_size": 63488 00:18:26.969 }, 00:18:26.969 { 00:18:26.969 "name": "BaseBdev4", 00:18:26.969 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:26.969 "is_configured": true, 00:18:26.969 "data_offset": 2048, 00:18:26.969 "data_size": 63488 00:18:26.969 } 00:18:26.969 ] 00:18:26.969 }' 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.969 [2024-11-26 19:07:53.551851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:26.969 [2024-11-26 19:07:53.567128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.969 19:07:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:26.969 [2024-11-26 19:07:53.576779] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.345 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.345 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.345 19:07:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.345 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.345 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.345 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.345 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.345 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.345 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.345 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.345 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.345 "name": "raid_bdev1", 00:18:28.345 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:28.345 "strip_size_kb": 64, 00:18:28.345 "state": "online", 00:18:28.345 "raid_level": "raid5f", 00:18:28.345 "superblock": true, 00:18:28.345 "num_base_bdevs": 4, 00:18:28.345 "num_base_bdevs_discovered": 4, 00:18:28.345 "num_base_bdevs_operational": 4, 00:18:28.345 "process": { 00:18:28.345 "type": "rebuild", 00:18:28.345 "target": "spare", 00:18:28.345 "progress": { 00:18:28.345 "blocks": 17280, 00:18:28.345 "percent": 9 00:18:28.345 } 00:18:28.345 }, 00:18:28.345 "base_bdevs_list": [ 00:18:28.345 { 00:18:28.345 "name": "spare", 00:18:28.345 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:28.345 "is_configured": true, 00:18:28.345 "data_offset": 2048, 00:18:28.345 "data_size": 63488 00:18:28.345 }, 00:18:28.345 { 00:18:28.345 "name": "BaseBdev2", 00:18:28.345 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:28.345 "is_configured": true, 00:18:28.345 "data_offset": 2048, 00:18:28.345 "data_size": 63488 
00:18:28.345 }, 00:18:28.345 { 00:18:28.345 "name": "BaseBdev3", 00:18:28.345 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:28.346 "is_configured": true, 00:18:28.346 "data_offset": 2048, 00:18:28.346 "data_size": 63488 00:18:28.346 }, 00:18:28.346 { 00:18:28.346 "name": "BaseBdev4", 00:18:28.346 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:28.346 "is_configured": true, 00:18:28.346 "data_offset": 2048, 00:18:28.346 "data_size": 63488 00:18:28.346 } 00:18:28.346 ] 00:18:28.346 }' 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:28.346 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=712 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.346 19:07:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.346 "name": "raid_bdev1", 00:18:28.346 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:28.346 "strip_size_kb": 64, 00:18:28.346 "state": "online", 00:18:28.346 "raid_level": "raid5f", 00:18:28.346 "superblock": true, 00:18:28.346 "num_base_bdevs": 4, 00:18:28.346 "num_base_bdevs_discovered": 4, 00:18:28.346 "num_base_bdevs_operational": 4, 00:18:28.346 "process": { 00:18:28.346 "type": "rebuild", 00:18:28.346 "target": "spare", 00:18:28.346 "progress": { 00:18:28.346 "blocks": 21120, 00:18:28.346 "percent": 11 00:18:28.346 } 00:18:28.346 }, 00:18:28.346 "base_bdevs_list": [ 00:18:28.346 { 00:18:28.346 "name": "spare", 00:18:28.346 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:28.346 "is_configured": true, 00:18:28.346 "data_offset": 2048, 00:18:28.346 "data_size": 63488 00:18:28.346 }, 00:18:28.346 { 00:18:28.346 "name": "BaseBdev2", 00:18:28.346 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:28.346 "is_configured": true, 00:18:28.346 "data_offset": 2048, 00:18:28.346 "data_size": 63488 
00:18:28.346 }, 00:18:28.346 { 00:18:28.346 "name": "BaseBdev3", 00:18:28.346 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:28.346 "is_configured": true, 00:18:28.346 "data_offset": 2048, 00:18:28.346 "data_size": 63488 00:18:28.346 }, 00:18:28.346 { 00:18:28.346 "name": "BaseBdev4", 00:18:28.346 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:28.346 "is_configured": true, 00:18:28.346 "data_offset": 2048, 00:18:28.346 "data_size": 63488 00:18:28.346 } 00:18:28.346 ] 00:18:28.346 }' 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.346 19:07:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:29.282 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:29.282 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.282 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:29.282 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:29.282 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:29.282 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:29.541 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.541 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
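The log above records a genuine shell error from the test script: `bdev_raid.sh: line 666: [: =: unary operator expected`, produced by the trace line `'[' = false ']'`. An unset (or empty) variable expanded unquoted inside `[ ... ]` word-splits away entirely, leaving `[` with just `= false`, which is neither a valid unary nor binary expression. A minimal reproduction — not the SPDK script itself, `flag` is an illustrative variable name:

```shell
#!/usr/bin/env bash
# Reproduce the error class from the log: an unset, unquoted variable
# inside the classic test builtin.
unset flag
if [ $flag = false ] 2>/dev/null; then   # expands to: [ = false ]  -> "[: =: unary operator expected"
    echo "unreachable"
fi

# Quoting the expansion keeps the test well-formed even when the variable
# is empty; [[ ... ]] would also work, since it does not word-split.
if [ "$flag" = false ]; then
    echo "flag is false"
else
    echo "flag is empty or something else"   # -> this branch runs
fi
```

With the error suppressed, the unquoted form simply evaluates false (exit status 2), which is why the test run continues past line 666 despite the diagnostic.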
00:18:29.541 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.541 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.541 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.541 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.541 "name": "raid_bdev1", 00:18:29.541 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:29.541 "strip_size_kb": 64, 00:18:29.541 "state": "online", 00:18:29.541 "raid_level": "raid5f", 00:18:29.541 "superblock": true, 00:18:29.541 "num_base_bdevs": 4, 00:18:29.541 "num_base_bdevs_discovered": 4, 00:18:29.541 "num_base_bdevs_operational": 4, 00:18:29.541 "process": { 00:18:29.541 "type": "rebuild", 00:18:29.541 "target": "spare", 00:18:29.541 "progress": { 00:18:29.541 "blocks": 44160, 00:18:29.541 "percent": 23 00:18:29.541 } 00:18:29.541 }, 00:18:29.541 "base_bdevs_list": [ 00:18:29.541 { 00:18:29.541 "name": "spare", 00:18:29.541 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:29.541 "is_configured": true, 00:18:29.541 "data_offset": 2048, 00:18:29.541 "data_size": 63488 00:18:29.541 }, 00:18:29.541 { 00:18:29.541 "name": "BaseBdev2", 00:18:29.541 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:29.541 "is_configured": true, 00:18:29.541 "data_offset": 2048, 00:18:29.541 "data_size": 63488 00:18:29.541 }, 00:18:29.541 { 00:18:29.541 "name": "BaseBdev3", 00:18:29.541 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:29.541 "is_configured": true, 00:18:29.541 "data_offset": 2048, 00:18:29.541 "data_size": 63488 00:18:29.541 }, 00:18:29.541 { 00:18:29.541 "name": "BaseBdev4", 00:18:29.541 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:29.541 "is_configured": true, 00:18:29.541 "data_offset": 2048, 00:18:29.541 "data_size": 63488 00:18:29.541 } 00:18:29.541 ] 00:18:29.541 }' 00:18:29.541 19:07:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.541 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.541 19:07:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.541 19:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.541 19:07:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.477 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.477 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.477 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.477 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.477 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.477 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.477 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.477 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.477 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.477 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.477 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.735 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.735 "name": "raid_bdev1", 00:18:30.735 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:30.735 
"strip_size_kb": 64, 00:18:30.735 "state": "online", 00:18:30.735 "raid_level": "raid5f", 00:18:30.735 "superblock": true, 00:18:30.735 "num_base_bdevs": 4, 00:18:30.735 "num_base_bdevs_discovered": 4, 00:18:30.735 "num_base_bdevs_operational": 4, 00:18:30.735 "process": { 00:18:30.735 "type": "rebuild", 00:18:30.735 "target": "spare", 00:18:30.735 "progress": { 00:18:30.735 "blocks": 65280, 00:18:30.735 "percent": 34 00:18:30.735 } 00:18:30.735 }, 00:18:30.735 "base_bdevs_list": [ 00:18:30.735 { 00:18:30.735 "name": "spare", 00:18:30.735 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:30.735 "is_configured": true, 00:18:30.735 "data_offset": 2048, 00:18:30.735 "data_size": 63488 00:18:30.735 }, 00:18:30.735 { 00:18:30.735 "name": "BaseBdev2", 00:18:30.735 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:30.735 "is_configured": true, 00:18:30.735 "data_offset": 2048, 00:18:30.735 "data_size": 63488 00:18:30.735 }, 00:18:30.735 { 00:18:30.735 "name": "BaseBdev3", 00:18:30.735 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:30.735 "is_configured": true, 00:18:30.735 "data_offset": 2048, 00:18:30.735 "data_size": 63488 00:18:30.735 }, 00:18:30.735 { 00:18:30.735 "name": "BaseBdev4", 00:18:30.735 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:30.735 "is_configured": true, 00:18:30.735 "data_offset": 2048, 00:18:30.735 "data_size": 63488 00:18:30.735 } 00:18:30.735 ] 00:18:30.735 }' 00:18:30.735 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.735 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.735 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.735 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.735 19:07:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:31.672 
19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.672 "name": "raid_bdev1", 00:18:31.672 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:31.672 "strip_size_kb": 64, 00:18:31.672 "state": "online", 00:18:31.672 "raid_level": "raid5f", 00:18:31.672 "superblock": true, 00:18:31.672 "num_base_bdevs": 4, 00:18:31.672 "num_base_bdevs_discovered": 4, 00:18:31.672 "num_base_bdevs_operational": 4, 00:18:31.672 "process": { 00:18:31.672 "type": "rebuild", 00:18:31.672 "target": "spare", 00:18:31.672 "progress": { 00:18:31.672 "blocks": 86400, 00:18:31.672 "percent": 45 00:18:31.672 } 00:18:31.672 }, 00:18:31.672 "base_bdevs_list": [ 00:18:31.672 { 00:18:31.672 "name": "spare", 00:18:31.672 "uuid": 
"eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:31.672 "is_configured": true, 00:18:31.672 "data_offset": 2048, 00:18:31.672 "data_size": 63488 00:18:31.672 }, 00:18:31.672 { 00:18:31.672 "name": "BaseBdev2", 00:18:31.672 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:31.672 "is_configured": true, 00:18:31.672 "data_offset": 2048, 00:18:31.672 "data_size": 63488 00:18:31.672 }, 00:18:31.672 { 00:18:31.672 "name": "BaseBdev3", 00:18:31.672 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:31.672 "is_configured": true, 00:18:31.672 "data_offset": 2048, 00:18:31.672 "data_size": 63488 00:18:31.672 }, 00:18:31.672 { 00:18:31.672 "name": "BaseBdev4", 00:18:31.672 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:31.672 "is_configured": true, 00:18:31.672 "data_offset": 2048, 00:18:31.672 "data_size": 63488 00:18:31.672 } 00:18:31.672 ] 00:18:31.672 }' 00:18:31.672 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.931 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.931 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.931 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.931 19:07:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.866 "name": "raid_bdev1", 00:18:32.866 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:32.866 "strip_size_kb": 64, 00:18:32.866 "state": "online", 00:18:32.866 "raid_level": "raid5f", 00:18:32.866 "superblock": true, 00:18:32.866 "num_base_bdevs": 4, 00:18:32.866 "num_base_bdevs_discovered": 4, 00:18:32.866 "num_base_bdevs_operational": 4, 00:18:32.866 "process": { 00:18:32.866 "type": "rebuild", 00:18:32.866 "target": "spare", 00:18:32.866 "progress": { 00:18:32.866 "blocks": 109440, 00:18:32.866 "percent": 57 00:18:32.866 } 00:18:32.866 }, 00:18:32.866 "base_bdevs_list": [ 00:18:32.866 { 00:18:32.866 "name": "spare", 00:18:32.866 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:32.866 "is_configured": true, 00:18:32.866 "data_offset": 2048, 00:18:32.866 "data_size": 63488 00:18:32.866 }, 00:18:32.866 { 00:18:32.866 "name": "BaseBdev2", 00:18:32.866 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:32.866 "is_configured": true, 00:18:32.866 "data_offset": 2048, 00:18:32.866 "data_size": 63488 00:18:32.866 }, 00:18:32.866 { 00:18:32.866 "name": "BaseBdev3", 00:18:32.866 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:32.866 "is_configured": true, 00:18:32.866 
"data_offset": 2048, 00:18:32.866 "data_size": 63488 00:18:32.866 }, 00:18:32.866 { 00:18:32.866 "name": "BaseBdev4", 00:18:32.866 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:32.866 "is_configured": true, 00:18:32.866 "data_offset": 2048, 00:18:32.866 "data_size": 63488 00:18:32.866 } 00:18:32.866 ] 00:18:32.866 }' 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.866 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.124 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.124 19:07:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.062 "name": "raid_bdev1", 00:18:34.062 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:34.062 "strip_size_kb": 64, 00:18:34.062 "state": "online", 00:18:34.062 "raid_level": "raid5f", 00:18:34.062 "superblock": true, 00:18:34.062 "num_base_bdevs": 4, 00:18:34.062 "num_base_bdevs_discovered": 4, 00:18:34.062 "num_base_bdevs_operational": 4, 00:18:34.062 "process": { 00:18:34.062 "type": "rebuild", 00:18:34.062 "target": "spare", 00:18:34.062 "progress": { 00:18:34.062 "blocks": 130560, 00:18:34.062 "percent": 68 00:18:34.062 } 00:18:34.062 }, 00:18:34.062 "base_bdevs_list": [ 00:18:34.062 { 00:18:34.062 "name": "spare", 00:18:34.062 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:34.062 "is_configured": true, 00:18:34.062 "data_offset": 2048, 00:18:34.062 "data_size": 63488 00:18:34.062 }, 00:18:34.062 { 00:18:34.062 "name": "BaseBdev2", 00:18:34.062 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:34.062 "is_configured": true, 00:18:34.062 "data_offset": 2048, 00:18:34.062 "data_size": 63488 00:18:34.062 }, 00:18:34.062 { 00:18:34.062 "name": "BaseBdev3", 00:18:34.062 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:34.062 "is_configured": true, 00:18:34.062 "data_offset": 2048, 00:18:34.062 "data_size": 63488 00:18:34.062 }, 00:18:34.062 { 00:18:34.062 "name": "BaseBdev4", 00:18:34.062 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:34.062 "is_configured": true, 00:18:34.062 "data_offset": 2048, 00:18:34.062 "data_size": 63488 00:18:34.062 } 00:18:34.062 ] 00:18:34.062 }' 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:18:34.062 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.323 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.323 19:08:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.257 "name": "raid_bdev1", 00:18:35.257 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:35.257 "strip_size_kb": 64, 00:18:35.257 "state": "online", 00:18:35.257 "raid_level": "raid5f", 00:18:35.257 "superblock": true, 00:18:35.257 "num_base_bdevs": 4, 00:18:35.257 "num_base_bdevs_discovered": 4, 
00:18:35.257 "num_base_bdevs_operational": 4, 00:18:35.257 "process": { 00:18:35.257 "type": "rebuild", 00:18:35.257 "target": "spare", 00:18:35.257 "progress": { 00:18:35.257 "blocks": 153600, 00:18:35.257 "percent": 80 00:18:35.257 } 00:18:35.257 }, 00:18:35.257 "base_bdevs_list": [ 00:18:35.257 { 00:18:35.257 "name": "spare", 00:18:35.257 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:35.257 "is_configured": true, 00:18:35.257 "data_offset": 2048, 00:18:35.257 "data_size": 63488 00:18:35.257 }, 00:18:35.257 { 00:18:35.257 "name": "BaseBdev2", 00:18:35.257 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:35.257 "is_configured": true, 00:18:35.257 "data_offset": 2048, 00:18:35.257 "data_size": 63488 00:18:35.257 }, 00:18:35.257 { 00:18:35.257 "name": "BaseBdev3", 00:18:35.257 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:35.257 "is_configured": true, 00:18:35.257 "data_offset": 2048, 00:18:35.257 "data_size": 63488 00:18:35.257 }, 00:18:35.257 { 00:18:35.257 "name": "BaseBdev4", 00:18:35.257 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:35.257 "is_configured": true, 00:18:35.257 "data_offset": 2048, 00:18:35.257 "data_size": 63488 00:18:35.257 } 00:18:35.257 ] 00:18:35.257 }' 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.257 19:08:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.631 "name": "raid_bdev1", 00:18:36.631 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:36.631 "strip_size_kb": 64, 00:18:36.631 "state": "online", 00:18:36.631 "raid_level": "raid5f", 00:18:36.631 "superblock": true, 00:18:36.631 "num_base_bdevs": 4, 00:18:36.631 "num_base_bdevs_discovered": 4, 00:18:36.631 "num_base_bdevs_operational": 4, 00:18:36.631 "process": { 00:18:36.631 "type": "rebuild", 00:18:36.631 "target": "spare", 00:18:36.631 "progress": { 00:18:36.631 "blocks": 174720, 00:18:36.631 "percent": 91 00:18:36.631 } 00:18:36.631 }, 00:18:36.631 "base_bdevs_list": [ 00:18:36.631 { 00:18:36.631 "name": "spare", 00:18:36.631 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:36.631 "is_configured": true, 00:18:36.631 "data_offset": 2048, 00:18:36.631 "data_size": 63488 00:18:36.631 }, 00:18:36.631 { 00:18:36.631 "name": "BaseBdev2", 
00:18:36.631 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:36.631 "is_configured": true, 00:18:36.631 "data_offset": 2048, 00:18:36.631 "data_size": 63488 00:18:36.631 }, 00:18:36.631 { 00:18:36.631 "name": "BaseBdev3", 00:18:36.631 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:36.631 "is_configured": true, 00:18:36.631 "data_offset": 2048, 00:18:36.631 "data_size": 63488 00:18:36.631 }, 00:18:36.631 { 00:18:36.631 "name": "BaseBdev4", 00:18:36.631 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:36.631 "is_configured": true, 00:18:36.631 "data_offset": 2048, 00:18:36.631 "data_size": 63488 00:18:36.631 } 00:18:36.631 ] 00:18:36.631 }' 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.631 19:08:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.631 19:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.631 19:08:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.196 [2024-11-26 19:08:03.702992] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:37.196 [2024-11-26 19:08:03.703130] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:37.196 [2024-11-26 19:08:03.703394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.452 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.452 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.452 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.452 19:08:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.452 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.452 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.453 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.453 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.453 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.453 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.453 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.710 "name": "raid_bdev1", 00:18:37.710 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:37.710 "strip_size_kb": 64, 00:18:37.710 "state": "online", 00:18:37.710 "raid_level": "raid5f", 00:18:37.710 "superblock": true, 00:18:37.710 "num_base_bdevs": 4, 00:18:37.710 "num_base_bdevs_discovered": 4, 00:18:37.710 "num_base_bdevs_operational": 4, 00:18:37.710 "base_bdevs_list": [ 00:18:37.710 { 00:18:37.710 "name": "spare", 00:18:37.710 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:37.710 "is_configured": true, 00:18:37.710 "data_offset": 2048, 00:18:37.710 "data_size": 63488 00:18:37.710 }, 00:18:37.710 { 00:18:37.710 "name": "BaseBdev2", 00:18:37.710 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:37.710 "is_configured": true, 00:18:37.710 "data_offset": 2048, 00:18:37.710 "data_size": 63488 00:18:37.710 }, 00:18:37.710 { 00:18:37.710 "name": "BaseBdev3", 00:18:37.710 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:37.710 "is_configured": true, 00:18:37.710 "data_offset": 2048, 00:18:37.710 
"data_size": 63488 00:18:37.710 }, 00:18:37.710 { 00:18:37.710 "name": "BaseBdev4", 00:18:37.710 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:37.710 "is_configured": true, 00:18:37.710 "data_offset": 2048, 00:18:37.710 "data_size": 63488 00:18:37.710 } 00:18:37.710 ] 00:18:37.710 }' 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.710 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.710 19:08:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.710 "name": "raid_bdev1", 00:18:37.710 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:37.710 "strip_size_kb": 64, 00:18:37.710 "state": "online", 00:18:37.711 "raid_level": "raid5f", 00:18:37.711 "superblock": true, 00:18:37.711 "num_base_bdevs": 4, 00:18:37.711 "num_base_bdevs_discovered": 4, 00:18:37.711 "num_base_bdevs_operational": 4, 00:18:37.711 "base_bdevs_list": [ 00:18:37.711 { 00:18:37.711 "name": "spare", 00:18:37.711 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:37.711 "is_configured": true, 00:18:37.711 "data_offset": 2048, 00:18:37.711 "data_size": 63488 00:18:37.711 }, 00:18:37.711 { 00:18:37.711 "name": "BaseBdev2", 00:18:37.711 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:37.711 "is_configured": true, 00:18:37.711 "data_offset": 2048, 00:18:37.711 "data_size": 63488 00:18:37.711 }, 00:18:37.711 { 00:18:37.711 "name": "BaseBdev3", 00:18:37.711 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:37.711 "is_configured": true, 00:18:37.711 "data_offset": 2048, 00:18:37.711 "data_size": 63488 00:18:37.711 }, 00:18:37.711 { 00:18:37.711 "name": "BaseBdev4", 00:18:37.711 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:37.711 "is_configured": true, 00:18:37.711 "data_offset": 2048, 00:18:37.711 "data_size": 63488 00:18:37.711 } 00:18:37.711 ] 00:18:37.711 }' 00:18:37.711 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.711 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:37.711 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.969 "name": "raid_bdev1", 00:18:37.969 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:37.969 "strip_size_kb": 64, 00:18:37.969 "state": "online", 00:18:37.969 "raid_level": "raid5f", 00:18:37.969 "superblock": true, 00:18:37.969 "num_base_bdevs": 4, 00:18:37.969 "num_base_bdevs_discovered": 4, 00:18:37.969 
"num_base_bdevs_operational": 4, 00:18:37.969 "base_bdevs_list": [ 00:18:37.969 { 00:18:37.969 "name": "spare", 00:18:37.969 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:37.969 "is_configured": true, 00:18:37.969 "data_offset": 2048, 00:18:37.969 "data_size": 63488 00:18:37.969 }, 00:18:37.969 { 00:18:37.969 "name": "BaseBdev2", 00:18:37.969 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:37.969 "is_configured": true, 00:18:37.969 "data_offset": 2048, 00:18:37.969 "data_size": 63488 00:18:37.969 }, 00:18:37.969 { 00:18:37.969 "name": "BaseBdev3", 00:18:37.969 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:37.969 "is_configured": true, 00:18:37.969 "data_offset": 2048, 00:18:37.969 "data_size": 63488 00:18:37.969 }, 00:18:37.969 { 00:18:37.969 "name": "BaseBdev4", 00:18:37.969 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:37.969 "is_configured": true, 00:18:37.969 "data_offset": 2048, 00:18:37.969 "data_size": 63488 00:18:37.969 } 00:18:37.969 ] 00:18:37.969 }' 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.969 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.533 [2024-11-26 19:08:04.901969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.533 [2024-11-26 19:08:04.902195] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.533 [2024-11-26 19:08:04.902376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.533 [2024-11-26 19:08:04.902532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:18:38.533 [2024-11-26 19:08:04.902571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:38.533 19:08:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:38.533 19:08:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:38.790 /dev/nbd0 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:38.790 1+0 records in 00:18:38.790 1+0 records out 00:18:38.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392367 s, 10.4 MB/s 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:38.790 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:39.047 /dev/nbd1 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:39.047 1+0 records in 00:18:39.047 1+0 records out 00:18:39.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565204 s, 7.2 MB/s 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:39.047 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:39.048 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:39.048 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:39.048 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:39.309 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:39.309 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:39.309 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:39.309 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:39.309 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:39.309 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.309 19:08:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:18:39.884 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:39.884 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:39.884 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:39.884 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.884 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.884 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:39.884 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:39.884 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.884 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.884 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.142 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.142 [2024-11-26 19:08:06.524029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:40.142 [2024-11-26 19:08:06.524121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.142 [2024-11-26 19:08:06.524180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:40.142 [2024-11-26 19:08:06.524200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.142 [2024-11-26 19:08:06.527607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.142 [2024-11-26 19:08:06.527675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:40.142 [2024-11-26 19:08:06.527865] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:40.142 [2024-11-26 19:08:06.527973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:40.143 [2024-11-26 19:08:06.528172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:40.143 [2024-11-26 19:08:06.528435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:18:40.143 [2024-11-26 19:08:06.528581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:40.143 spare 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.143 [2024-11-26 19:08:06.628769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:40.143 [2024-11-26 19:08:06.628943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:40.143 [2024-11-26 19:08:06.629562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:40.143 [2024-11-26 19:08:06.636092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:40.143 [2024-11-26 19:08:06.636294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:40.143 [2024-11-26 19:08:06.636670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.143 19:08:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.143 "name": "raid_bdev1", 00:18:40.143 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:40.143 "strip_size_kb": 64, 00:18:40.143 "state": "online", 00:18:40.143 "raid_level": "raid5f", 00:18:40.143 "superblock": true, 00:18:40.143 "num_base_bdevs": 4, 00:18:40.143 "num_base_bdevs_discovered": 4, 00:18:40.143 "num_base_bdevs_operational": 4, 00:18:40.143 "base_bdevs_list": [ 00:18:40.143 { 00:18:40.143 "name": "spare", 00:18:40.143 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:40.143 "is_configured": true, 00:18:40.143 "data_offset": 2048, 00:18:40.143 "data_size": 63488 00:18:40.143 }, 00:18:40.143 { 00:18:40.143 "name": "BaseBdev2", 00:18:40.143 "uuid": 
"34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:40.143 "is_configured": true, 00:18:40.143 "data_offset": 2048, 00:18:40.143 "data_size": 63488 00:18:40.143 }, 00:18:40.143 { 00:18:40.143 "name": "BaseBdev3", 00:18:40.143 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:40.143 "is_configured": true, 00:18:40.143 "data_offset": 2048, 00:18:40.143 "data_size": 63488 00:18:40.143 }, 00:18:40.143 { 00:18:40.143 "name": "BaseBdev4", 00:18:40.143 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:40.143 "is_configured": true, 00:18:40.143 "data_offset": 2048, 00:18:40.143 "data_size": 63488 00:18:40.143 } 00:18:40.143 ] 00:18:40.143 }' 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.143 19:08:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.816 19:08:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.816 "name": "raid_bdev1", 00:18:40.816 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:40.816 "strip_size_kb": 64, 00:18:40.816 "state": "online", 00:18:40.816 "raid_level": "raid5f", 00:18:40.816 "superblock": true, 00:18:40.816 "num_base_bdevs": 4, 00:18:40.816 "num_base_bdevs_discovered": 4, 00:18:40.816 "num_base_bdevs_operational": 4, 00:18:40.816 "base_bdevs_list": [ 00:18:40.816 { 00:18:40.816 "name": "spare", 00:18:40.816 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:40.816 "is_configured": true, 00:18:40.816 "data_offset": 2048, 00:18:40.816 "data_size": 63488 00:18:40.816 }, 00:18:40.816 { 00:18:40.816 "name": "BaseBdev2", 00:18:40.816 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:40.816 "is_configured": true, 00:18:40.816 "data_offset": 2048, 00:18:40.816 "data_size": 63488 00:18:40.816 }, 00:18:40.816 { 00:18:40.816 "name": "BaseBdev3", 00:18:40.816 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:40.816 "is_configured": true, 00:18:40.816 "data_offset": 2048, 00:18:40.816 "data_size": 63488 00:18:40.816 }, 00:18:40.816 { 00:18:40.816 "name": "BaseBdev4", 00:18:40.816 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:40.816 "is_configured": true, 00:18:40.816 "data_offset": 2048, 00:18:40.816 "data_size": 63488 00:18:40.816 } 00:18:40.816 ] 00:18:40.816 }' 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.816 
19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.816 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.817 [2024-11-26 19:08:07.381365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.817 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.076 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.076 "name": "raid_bdev1", 00:18:41.076 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:41.076 "strip_size_kb": 64, 00:18:41.076 "state": "online", 00:18:41.076 "raid_level": "raid5f", 00:18:41.076 "superblock": true, 00:18:41.076 "num_base_bdevs": 4, 00:18:41.076 "num_base_bdevs_discovered": 3, 00:18:41.076 "num_base_bdevs_operational": 3, 00:18:41.076 "base_bdevs_list": [ 00:18:41.076 { 00:18:41.076 "name": null, 00:18:41.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.076 "is_configured": false, 00:18:41.076 "data_offset": 0, 00:18:41.076 "data_size": 63488 00:18:41.076 }, 00:18:41.076 { 00:18:41.076 "name": "BaseBdev2", 00:18:41.076 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:41.076 "is_configured": true, 00:18:41.076 "data_offset": 2048, 00:18:41.076 "data_size": 63488 00:18:41.076 }, 00:18:41.076 { 00:18:41.076 "name": "BaseBdev3", 00:18:41.076 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:41.076 "is_configured": true, 00:18:41.076 "data_offset": 2048, 00:18:41.076 "data_size": 63488 00:18:41.076 }, 00:18:41.076 { 00:18:41.076 "name": "BaseBdev4", 
00:18:41.076 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:41.076 "is_configured": true, 00:18:41.076 "data_offset": 2048, 00:18:41.076 "data_size": 63488 00:18:41.076 } 00:18:41.076 ] 00:18:41.076 }' 00:18:41.076 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.076 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.642 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:41.642 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.642 19:08:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.642 [2024-11-26 19:08:07.989477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.642 [2024-11-26 19:08:07.989834] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:41.642 [2024-11-26 19:08:07.989870] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
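The DEBUG lines above show the core of the hot re-add path: `raid_bdev_examine_sb` compares the superblock sequence number on the returning base bdev (`spare`, seq 4) against the assembled raid bdev (`raid_bdev1`, seq 5) and, finding it stale, re-adds it for rebuild rather than bringing it online as-is. A minimal sketch of that decision logic follows; the function name and the "reject" branch are illustrative assumptions, not SPDK's internal API.

```python
# Illustrative sketch of the seq_number comparison logged by
# raid_bdev_examine_sb above. Names are hypothetical, not SPDK internals.
def examine_superblock(bdev_seq: int, raid_seq: int) -> str:
    if bdev_seq < raid_seq:
        # Log: "seq_number on bdev spare (4) smaller than existing raid
        # bdev raid_bdev1 (5)" -> "Re-adding bdev spare to raid bdev".
        return "re-add"
    if bdev_seq == raid_seq:
        return "configure"  # in sync: configure the base bdev directly
    return "reject"         # newer than the array (hypothetical branch)

# Values taken from this log: spare carries seq 4, raid_bdev1 is at seq 5.
decision = examine_superblock(4, 5)
```

The re-add outcome is what triggers the "Started rebuild on raid bdev raid_bdev1" notice that follows in the log.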
00:18:41.642 [2024-11-26 19:08:07.989933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.642 [2024-11-26 19:08:08.004006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:41.642 19:08:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.642 19:08:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:41.642 [2024-11-26 19:08:08.013325] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.579 "name": "raid_bdev1", 00:18:42.579 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:42.579 "strip_size_kb": 64, 00:18:42.579 "state": "online", 00:18:42.579 
"raid_level": "raid5f", 00:18:42.579 "superblock": true, 00:18:42.579 "num_base_bdevs": 4, 00:18:42.579 "num_base_bdevs_discovered": 4, 00:18:42.579 "num_base_bdevs_operational": 4, 00:18:42.579 "process": { 00:18:42.579 "type": "rebuild", 00:18:42.579 "target": "spare", 00:18:42.579 "progress": { 00:18:42.579 "blocks": 17280, 00:18:42.579 "percent": 9 00:18:42.579 } 00:18:42.579 }, 00:18:42.579 "base_bdevs_list": [ 00:18:42.579 { 00:18:42.579 "name": "spare", 00:18:42.579 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:42.579 "is_configured": true, 00:18:42.579 "data_offset": 2048, 00:18:42.579 "data_size": 63488 00:18:42.579 }, 00:18:42.579 { 00:18:42.579 "name": "BaseBdev2", 00:18:42.579 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:42.579 "is_configured": true, 00:18:42.579 "data_offset": 2048, 00:18:42.579 "data_size": 63488 00:18:42.579 }, 00:18:42.579 { 00:18:42.579 "name": "BaseBdev3", 00:18:42.579 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:42.579 "is_configured": true, 00:18:42.579 "data_offset": 2048, 00:18:42.579 "data_size": 63488 00:18:42.579 }, 00:18:42.579 { 00:18:42.579 "name": "BaseBdev4", 00:18:42.579 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:42.579 "is_configured": true, 00:18:42.579 "data_offset": 2048, 00:18:42.579 "data_size": 63488 00:18:42.579 } 00:18:42.579 ] 00:18:42.579 }' 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.579 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.579 [2024-11-26 19:08:09.163150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.838 [2024-11-26 19:08:09.228373] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:42.838 [2024-11-26 19:08:09.228746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.838 [2024-11-26 19:08:09.228913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.839 [2024-11-26 19:08:09.228977] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.839 "name": "raid_bdev1", 00:18:42.839 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:42.839 "strip_size_kb": 64, 00:18:42.839 "state": "online", 00:18:42.839 "raid_level": "raid5f", 00:18:42.839 "superblock": true, 00:18:42.839 "num_base_bdevs": 4, 00:18:42.839 "num_base_bdevs_discovered": 3, 00:18:42.839 "num_base_bdevs_operational": 3, 00:18:42.839 "base_bdevs_list": [ 00:18:42.839 { 00:18:42.839 "name": null, 00:18:42.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.839 "is_configured": false, 00:18:42.839 "data_offset": 0, 00:18:42.839 "data_size": 63488 00:18:42.839 }, 00:18:42.839 { 00:18:42.839 "name": "BaseBdev2", 00:18:42.839 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:42.839 "is_configured": true, 00:18:42.839 "data_offset": 2048, 00:18:42.839 "data_size": 63488 00:18:42.839 }, 00:18:42.839 { 00:18:42.839 "name": "BaseBdev3", 00:18:42.839 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:42.839 "is_configured": true, 00:18:42.839 "data_offset": 2048, 00:18:42.839 "data_size": 63488 00:18:42.839 }, 00:18:42.839 { 00:18:42.839 "name": "BaseBdev4", 00:18:42.839 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:42.839 "is_configured": true, 00:18:42.839 "data_offset": 2048, 00:18:42.839 "data_size": 63488 00:18:42.839 } 00:18:42.839 ] 00:18:42.839 }' 
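The `verify_raid_bdev_state raid_bdev1 online raid5f 64 3` call traced above fetches `bdev_raid_get_bdevs all` and filters it with `jq -r '.[] | select(.name == "raid_bdev1")'` before comparing fields. The same check can be sketched in Python against a trimmed copy of the JSON record from this log; the helper name mirrors the shell function but is otherwise an illustration, not part of SPDK.

```python
import json

def verify_raid_bdev_state(bdevs, name, state, level, strip_kb, operational):
    """Select one record from `bdev_raid_get_bdevs all` output (the jq
    select() above) and confirm it matches the expected array state."""
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_kb
    assert info["num_base_bdevs_operational"] == operational
    # After the hot remove, discovered drops to 3 of 4 base bdevs while
    # the removed slot stays in base_bdevs_list with a zeroed uuid.
    assert info["num_base_bdevs_discovered"] == operational
    return info

# Trimmed sample taken from the raid_bdev_info JSON in this log.
sample = json.loads('''[{
  "name": "raid_bdev1", "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a",
  "strip_size_kb": 64, "state": "online", "raid_level": "raid5f",
  "superblock": true, "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 63488},
    {"name": "BaseBdev2", "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]}]''')

info = verify_raid_bdev_state(sample, "raid_bdev1", "online", "raid5f", 64, 3)
```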
00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.839 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.406 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:43.406 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.406 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.406 [2024-11-26 19:08:09.811268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:43.406 [2024-11-26 19:08:09.811372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.406 [2024-11-26 19:08:09.811415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:43.406 [2024-11-26 19:08:09.811435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.406 [2024-11-26 19:08:09.812093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.406 [2024-11-26 19:08:09.812134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:43.406 [2024-11-26 19:08:09.812277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:43.406 [2024-11-26 19:08:09.812327] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:43.406 [2024-11-26 19:08:09.812349] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
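Once the spare is recreated and re-added, the test polls `verify_raid_bdev_process`, whose jq filters `'.process.type // "none"'` and `'.process.target // "none"'` collapse a missing `process` key to `"none"`, so one check covers both a rebuilding and an idle array. A Python equivalent of that defaulting, using sample records shaped like the ones in this log (the function name is illustrative):

```python
def process_of(info: dict) -> tuple[str, str]:
    """Mirror jq's `.process.type // "none"` / `.process.target // "none"`:
    absent keys fall back to "none" instead of raising."""
    proc = info.get("process") or {}
    return proc.get("type", "none"), proc.get("target", "none")

# Shapes taken from this log's raid_bdev_info output.
rebuilding = {"name": "raid_bdev1",
              "process": {"type": "rebuild", "target": "spare",
                          "progress": {"blocks": 17280, "percent": 9}}}
idle = {"name": "raid_bdev1"}  # no "process" key once rebuild finishes

ptype, ptarget = process_of(rebuilding)
itype, itarget = process_of(idle)
```

This is why the traced comparisons alternate between `[[ rebuild == \r\e\b\u\i\l\d ]]` during the rebuild and `[[ none == \n\o\n\e ]]` after it completes.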
00:18:43.406 [2024-11-26 19:08:09.812386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.406 [2024-11-26 19:08:09.826195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:43.406 spare 00:18:43.406 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.406 19:08:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:43.406 [2024-11-26 19:08:09.835405] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:44.342 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.342 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.342 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.342 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.342 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.342 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.342 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.342 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.342 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.342 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.342 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.342 "name": "raid_bdev1", 00:18:44.342 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:44.342 "strip_size_kb": 64, 00:18:44.342 "state": 
"online", 00:18:44.342 "raid_level": "raid5f", 00:18:44.342 "superblock": true, 00:18:44.342 "num_base_bdevs": 4, 00:18:44.342 "num_base_bdevs_discovered": 4, 00:18:44.342 "num_base_bdevs_operational": 4, 00:18:44.342 "process": { 00:18:44.342 "type": "rebuild", 00:18:44.342 "target": "spare", 00:18:44.342 "progress": { 00:18:44.342 "blocks": 17280, 00:18:44.342 "percent": 9 00:18:44.342 } 00:18:44.342 }, 00:18:44.342 "base_bdevs_list": [ 00:18:44.342 { 00:18:44.342 "name": "spare", 00:18:44.342 "uuid": "eeb35dba-a8f3-5fff-bae0-2d68ebb87326", 00:18:44.342 "is_configured": true, 00:18:44.342 "data_offset": 2048, 00:18:44.342 "data_size": 63488 00:18:44.342 }, 00:18:44.342 { 00:18:44.342 "name": "BaseBdev2", 00:18:44.342 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:44.342 "is_configured": true, 00:18:44.342 "data_offset": 2048, 00:18:44.342 "data_size": 63488 00:18:44.342 }, 00:18:44.342 { 00:18:44.342 "name": "BaseBdev3", 00:18:44.342 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:44.342 "is_configured": true, 00:18:44.342 "data_offset": 2048, 00:18:44.342 "data_size": 63488 00:18:44.342 }, 00:18:44.342 { 00:18:44.342 "name": "BaseBdev4", 00:18:44.342 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:44.342 "is_configured": true, 00:18:44.342 "data_offset": 2048, 00:18:44.343 "data_size": 63488 00:18:44.343 } 00:18:44.343 ] 00:18:44.343 }' 00:18:44.343 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.343 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.343 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.601 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.601 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:44.601 19:08:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.601 19:08:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.601 [2024-11-26 19:08:10.994214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.601 [2024-11-26 19:08:11.051731] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:44.601 [2024-11-26 19:08:11.052055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.601 [2024-11-26 19:08:11.052220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.601 [2024-11-26 19:08:11.052387] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.601 19:08:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.601 "name": "raid_bdev1", 00:18:44.601 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:44.601 "strip_size_kb": 64, 00:18:44.601 "state": "online", 00:18:44.601 "raid_level": "raid5f", 00:18:44.601 "superblock": true, 00:18:44.601 "num_base_bdevs": 4, 00:18:44.601 "num_base_bdevs_discovered": 3, 00:18:44.601 "num_base_bdevs_operational": 3, 00:18:44.601 "base_bdevs_list": [ 00:18:44.601 { 00:18:44.601 "name": null, 00:18:44.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.601 "is_configured": false, 00:18:44.601 "data_offset": 0, 00:18:44.601 "data_size": 63488 00:18:44.601 }, 00:18:44.601 { 00:18:44.601 "name": "BaseBdev2", 00:18:44.601 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:44.601 "is_configured": true, 00:18:44.601 "data_offset": 2048, 00:18:44.601 "data_size": 63488 00:18:44.601 }, 00:18:44.601 { 00:18:44.601 "name": "BaseBdev3", 00:18:44.601 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:44.601 "is_configured": true, 00:18:44.601 "data_offset": 2048, 00:18:44.601 "data_size": 63488 00:18:44.601 }, 00:18:44.601 { 00:18:44.601 "name": "BaseBdev4", 00:18:44.601 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:44.601 "is_configured": true, 00:18:44.601 "data_offset": 2048, 00:18:44.601 
"data_size": 63488 00:18:44.601 } 00:18:44.601 ] 00:18:44.601 }' 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.601 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.166 "name": "raid_bdev1", 00:18:45.166 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:45.166 "strip_size_kb": 64, 00:18:45.166 "state": "online", 00:18:45.166 "raid_level": "raid5f", 00:18:45.166 "superblock": true, 00:18:45.166 "num_base_bdevs": 4, 00:18:45.166 "num_base_bdevs_discovered": 3, 00:18:45.166 "num_base_bdevs_operational": 3, 00:18:45.166 "base_bdevs_list": [ 00:18:45.166 { 00:18:45.166 "name": null, 00:18:45.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.166 
"is_configured": false, 00:18:45.166 "data_offset": 0, 00:18:45.166 "data_size": 63488 00:18:45.166 }, 00:18:45.166 { 00:18:45.166 "name": "BaseBdev2", 00:18:45.166 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:45.166 "is_configured": true, 00:18:45.166 "data_offset": 2048, 00:18:45.166 "data_size": 63488 00:18:45.166 }, 00:18:45.166 { 00:18:45.166 "name": "BaseBdev3", 00:18:45.166 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:45.166 "is_configured": true, 00:18:45.166 "data_offset": 2048, 00:18:45.166 "data_size": 63488 00:18:45.166 }, 00:18:45.166 { 00:18:45.166 "name": "BaseBdev4", 00:18:45.166 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:45.166 "is_configured": true, 00:18:45.166 "data_offset": 2048, 00:18:45.166 "data_size": 63488 00:18:45.166 } 00:18:45.166 ] 00:18:45.166 }' 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:45.166 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.166 19:08:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.166 [2024-11-26 19:08:11.783602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:45.166 [2024-11-26 19:08:11.783676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.166 [2024-11-26 19:08:11.783723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:45.166 [2024-11-26 19:08:11.783739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.166 [2024-11-26 19:08:11.784434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.166 [2024-11-26 19:08:11.784466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:45.166 [2024-11-26 19:08:11.784589] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:45.166 [2024-11-26 19:08:11.784614] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:45.166 [2024-11-26 19:08:11.784632] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:45.166 [2024-11-26 19:08:11.784646] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:45.424 BaseBdev1 00:18:45.424 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.424 19:08:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.357 "name": "raid_bdev1", 00:18:46.357 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:46.357 "strip_size_kb": 64, 00:18:46.357 "state": "online", 00:18:46.357 "raid_level": "raid5f", 00:18:46.357 "superblock": true, 00:18:46.357 "num_base_bdevs": 4, 00:18:46.357 "num_base_bdevs_discovered": 3, 00:18:46.357 "num_base_bdevs_operational": 3, 00:18:46.357 "base_bdevs_list": [ 00:18:46.357 { 00:18:46.357 "name": null, 00:18:46.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.357 "is_configured": false, 00:18:46.357 
"data_offset": 0, 00:18:46.357 "data_size": 63488 00:18:46.357 }, 00:18:46.357 { 00:18:46.357 "name": "BaseBdev2", 00:18:46.357 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:46.357 "is_configured": true, 00:18:46.357 "data_offset": 2048, 00:18:46.357 "data_size": 63488 00:18:46.357 }, 00:18:46.357 { 00:18:46.357 "name": "BaseBdev3", 00:18:46.357 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:46.357 "is_configured": true, 00:18:46.357 "data_offset": 2048, 00:18:46.357 "data_size": 63488 00:18:46.357 }, 00:18:46.357 { 00:18:46.357 "name": "BaseBdev4", 00:18:46.357 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:46.357 "is_configured": true, 00:18:46.357 "data_offset": 2048, 00:18:46.357 "data_size": 63488 00:18:46.357 } 00:18:46.357 ] 00:18:46.357 }' 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.357 19:08:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.924 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:46.924 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.924 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:46.924 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:46.924 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.924 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.924 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.924 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.924 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:46.924 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.924 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.924 "name": "raid_bdev1", 00:18:46.924 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:46.924 "strip_size_kb": 64, 00:18:46.924 "state": "online", 00:18:46.924 "raid_level": "raid5f", 00:18:46.924 "superblock": true, 00:18:46.924 "num_base_bdevs": 4, 00:18:46.924 "num_base_bdevs_discovered": 3, 00:18:46.924 "num_base_bdevs_operational": 3, 00:18:46.924 "base_bdevs_list": [ 00:18:46.924 { 00:18:46.924 "name": null, 00:18:46.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.924 "is_configured": false, 00:18:46.924 "data_offset": 0, 00:18:46.924 "data_size": 63488 00:18:46.924 }, 00:18:46.924 { 00:18:46.924 "name": "BaseBdev2", 00:18:46.924 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:46.924 "is_configured": true, 00:18:46.924 "data_offset": 2048, 00:18:46.924 "data_size": 63488 00:18:46.924 }, 00:18:46.924 { 00:18:46.924 "name": "BaseBdev3", 00:18:46.924 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:46.924 "is_configured": true, 00:18:46.924 "data_offset": 2048, 00:18:46.924 "data_size": 63488 00:18:46.924 }, 00:18:46.924 { 00:18:46.924 "name": "BaseBdev4", 00:18:46.925 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:46.925 "is_configured": true, 00:18:46.925 "data_offset": 2048, 00:18:46.925 "data_size": 63488 00:18:46.925 } 00:18:46.925 ] 00:18:46.925 }' 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:46.925 
19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.925 [2024-11-26 19:08:13.504139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:46.925 [2024-11-26 19:08:13.504422] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:46.925 [2024-11-26 19:08:13.504448] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:46.925 request: 00:18:46.925 { 00:18:46.925 "base_bdev": "BaseBdev1", 00:18:46.925 "raid_bdev": "raid_bdev1", 00:18:46.925 "method": "bdev_raid_add_base_bdev", 00:18:46.925 "req_id": 1 00:18:46.925 } 00:18:46.925 Got JSON-RPC error response 00:18:46.925 response: 00:18:46.925 { 00:18:46.925 "code": -22, 00:18:46.925 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:46.925 } 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.925 19:08:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.303 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.303 "name": "raid_bdev1", 00:18:48.303 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:48.303 "strip_size_kb": 64, 00:18:48.303 "state": "online", 00:18:48.303 "raid_level": "raid5f", 00:18:48.303 "superblock": true, 00:18:48.303 "num_base_bdevs": 4, 00:18:48.303 "num_base_bdevs_discovered": 3, 00:18:48.303 "num_base_bdevs_operational": 3, 00:18:48.303 "base_bdevs_list": [ 00:18:48.303 { 00:18:48.303 "name": null, 00:18:48.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.304 "is_configured": false, 00:18:48.304 "data_offset": 0, 00:18:48.304 "data_size": 63488 00:18:48.304 }, 00:18:48.304 { 00:18:48.304 "name": "BaseBdev2", 00:18:48.304 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:48.304 "is_configured": true, 00:18:48.304 "data_offset": 2048, 00:18:48.304 "data_size": 63488 00:18:48.304 }, 00:18:48.304 { 00:18:48.304 "name": "BaseBdev3", 00:18:48.304 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:48.304 "is_configured": true, 00:18:48.304 "data_offset": 2048, 00:18:48.304 "data_size": 63488 00:18:48.304 }, 00:18:48.304 { 00:18:48.304 "name": "BaseBdev4", 00:18:48.304 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:48.304 "is_configured": true, 00:18:48.304 "data_offset": 2048, 00:18:48.304 "data_size": 63488 00:18:48.304 } 00:18:48.304 ] 00:18:48.304 }' 00:18:48.304 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.304 19:08:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.563 "name": "raid_bdev1", 00:18:48.563 "uuid": "6ef38368-d4ff-4a99-b6b6-c25c78aaa38a", 00:18:48.563 "strip_size_kb": 64, 00:18:48.563 "state": "online", 00:18:48.563 "raid_level": "raid5f", 00:18:48.563 "superblock": true, 00:18:48.563 "num_base_bdevs": 4, 00:18:48.563 "num_base_bdevs_discovered": 3, 00:18:48.563 "num_base_bdevs_operational": 3, 00:18:48.563 "base_bdevs_list": [ 00:18:48.563 { 00:18:48.563 "name": null, 00:18:48.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.563 "is_configured": false, 00:18:48.563 "data_offset": 0, 00:18:48.563 "data_size": 63488 00:18:48.563 }, 00:18:48.563 { 00:18:48.563 "name": "BaseBdev2", 00:18:48.563 "uuid": "34977652-2a89-5fc0-928d-db7bf0df327d", 00:18:48.563 "is_configured": true, 
00:18:48.563 "data_offset": 2048, 00:18:48.563 "data_size": 63488 00:18:48.563 }, 00:18:48.563 { 00:18:48.563 "name": "BaseBdev3", 00:18:48.563 "uuid": "67ac87c9-da5c-51d5-818b-3ba6967f030b", 00:18:48.563 "is_configured": true, 00:18:48.563 "data_offset": 2048, 00:18:48.563 "data_size": 63488 00:18:48.563 }, 00:18:48.563 { 00:18:48.563 "name": "BaseBdev4", 00:18:48.563 "uuid": "2554fd20-836b-5f95-a7bf-cfec6136dacc", 00:18:48.563 "is_configured": true, 00:18:48.563 "data_offset": 2048, 00:18:48.563 "data_size": 63488 00:18:48.563 } 00:18:48.563 ] 00:18:48.563 }' 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:48.563 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86043 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86043 ']' 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 86043 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86043 00:18:48.822 killing process with pid 86043 00:18:48.822 Received shutdown signal, test time was about 60.000000 seconds 00:18:48.822 00:18:48.822 Latency(us) 00:18:48.822 [2024-11-26T19:08:15.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.822 [2024-11-26T19:08:15.445Z] 
=================================================================================================================== 00:18:48.822 [2024-11-26T19:08:15.445Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86043' 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 86043 00:18:48.822 [2024-11-26 19:08:15.254610] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:48.822 19:08:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 86043 00:18:48.822 [2024-11-26 19:08:15.254803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.822 [2024-11-26 19:08:15.254948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.822 [2024-11-26 19:08:15.254975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:49.390 [2024-11-26 19:08:15.768757] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:50.327 19:08:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:50.327 00:18:50.327 real 0m29.448s 00:18:50.327 user 0m38.253s 00:18:50.327 sys 0m3.184s 00:18:50.327 19:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.327 19:08:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.327 ************************************ 00:18:50.327 END TEST raid5f_rebuild_test_sb 00:18:50.327 ************************************ 00:18:50.586 19:08:16 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:50.586 19:08:16 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:50.587 19:08:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:50.587 19:08:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.587 19:08:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.587 ************************************ 00:18:50.587 START TEST raid_state_function_test_sb_4k 00:18:50.587 ************************************ 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:50.587 19:08:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:50.587 Process raid pid: 86872 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86872 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86872' 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86872 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86872 ']' 00:18:50.587 19:08:16 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.587 19:08:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:50.587 [2024-11-26 19:08:17.104701] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:18:50.587 [2024-11-26 19:08:17.105160] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.846 [2024-11-26 19:08:17.295917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.846 [2024-11-26 19:08:17.450078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.105 [2024-11-26 19:08:17.678113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.105 [2024-11-26 19:08:17.678176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.672 [2024-11-26 19:08:18.060074] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:51.672 [2024-11-26 19:08:18.060144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:51.672 [2024-11-26 19:08:18.060164] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:51.672 [2024-11-26 19:08:18.060181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.672 
19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.672 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.673 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.673 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.673 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:51.673 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.673 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.673 "name": "Existed_Raid", 00:18:51.673 "uuid": "93a7a102-b03a-4f93-827a-8d3222aed2e5", 00:18:51.673 "strip_size_kb": 0, 00:18:51.673 "state": "configuring", 00:18:51.673 "raid_level": "raid1", 00:18:51.673 "superblock": true, 00:18:51.673 "num_base_bdevs": 2, 00:18:51.673 "num_base_bdevs_discovered": 0, 00:18:51.673 "num_base_bdevs_operational": 2, 00:18:51.673 "base_bdevs_list": [ 00:18:51.673 { 00:18:51.673 "name": "BaseBdev1", 00:18:51.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.673 "is_configured": false, 00:18:51.673 "data_offset": 0, 00:18:51.673 "data_size": 0 00:18:51.673 }, 00:18:51.673 { 00:18:51.673 "name": "BaseBdev2", 00:18:51.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.673 "is_configured": false, 00:18:51.673 "data_offset": 0, 00:18:51.673 "data_size": 0 00:18:51.673 } 00:18:51.673 ] 00:18:51.673 }' 00:18:51.673 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.673 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.241 [2024-11-26 19:08:18.624102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:52.241 [2024-11-26 19:08:18.624306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.241 [2024-11-26 19:08:18.632081] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:52.241 [2024-11-26 19:08:18.632138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:52.241 [2024-11-26 19:08:18.632156] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:52.241 [2024-11-26 19:08:18.632177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.241 19:08:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.241 [2024-11-26 19:08:18.683703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.241 BaseBdev1 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.241 [ 00:18:52.241 { 00:18:52.241 "name": "BaseBdev1", 00:18:52.241 "aliases": [ 00:18:52.241 
"ec594197-3ae3-46c5-9783-4e8381a3f4c8" 00:18:52.241 ], 00:18:52.241 "product_name": "Malloc disk", 00:18:52.241 "block_size": 4096, 00:18:52.241 "num_blocks": 8192, 00:18:52.241 "uuid": "ec594197-3ae3-46c5-9783-4e8381a3f4c8", 00:18:52.241 "assigned_rate_limits": { 00:18:52.241 "rw_ios_per_sec": 0, 00:18:52.241 "rw_mbytes_per_sec": 0, 00:18:52.241 "r_mbytes_per_sec": 0, 00:18:52.241 "w_mbytes_per_sec": 0 00:18:52.241 }, 00:18:52.241 "claimed": true, 00:18:52.241 "claim_type": "exclusive_write", 00:18:52.241 "zoned": false, 00:18:52.241 "supported_io_types": { 00:18:52.241 "read": true, 00:18:52.241 "write": true, 00:18:52.241 "unmap": true, 00:18:52.241 "flush": true, 00:18:52.241 "reset": true, 00:18:52.241 "nvme_admin": false, 00:18:52.241 "nvme_io": false, 00:18:52.241 "nvme_io_md": false, 00:18:52.241 "write_zeroes": true, 00:18:52.241 "zcopy": true, 00:18:52.241 "get_zone_info": false, 00:18:52.241 "zone_management": false, 00:18:52.241 "zone_append": false, 00:18:52.241 "compare": false, 00:18:52.241 "compare_and_write": false, 00:18:52.241 "abort": true, 00:18:52.241 "seek_hole": false, 00:18:52.241 "seek_data": false, 00:18:52.241 "copy": true, 00:18:52.241 "nvme_iov_md": false 00:18:52.241 }, 00:18:52.241 "memory_domains": [ 00:18:52.241 { 00:18:52.241 "dma_device_id": "system", 00:18:52.241 "dma_device_type": 1 00:18:52.241 }, 00:18:52.241 { 00:18:52.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.241 "dma_device_type": 2 00:18:52.241 } 00:18:52.241 ], 00:18:52.241 "driver_specific": {} 00:18:52.241 } 00:18:52.241 ] 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.241 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.241 "name": "Existed_Raid", 00:18:52.241 "uuid": "987f662e-db8f-4423-89c5-4efc435ed37d", 00:18:52.241 "strip_size_kb": 0, 00:18:52.241 "state": "configuring", 00:18:52.241 "raid_level": "raid1", 00:18:52.241 "superblock": true, 00:18:52.241 "num_base_bdevs": 2, 00:18:52.242 
"num_base_bdevs_discovered": 1, 00:18:52.242 "num_base_bdevs_operational": 2, 00:18:52.242 "base_bdevs_list": [ 00:18:52.242 { 00:18:52.242 "name": "BaseBdev1", 00:18:52.242 "uuid": "ec594197-3ae3-46c5-9783-4e8381a3f4c8", 00:18:52.242 "is_configured": true, 00:18:52.242 "data_offset": 256, 00:18:52.242 "data_size": 7936 00:18:52.242 }, 00:18:52.242 { 00:18:52.242 "name": "BaseBdev2", 00:18:52.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.242 "is_configured": false, 00:18:52.242 "data_offset": 0, 00:18:52.242 "data_size": 0 00:18:52.242 } 00:18:52.242 ] 00:18:52.242 }' 00:18:52.242 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.242 19:08:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.818 [2024-11-26 19:08:19.231944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:52.818 [2024-11-26 19:08:19.232015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.818 [2024-11-26 19:08:19.239999] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.818 [2024-11-26 19:08:19.242765] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:52.818 [2024-11-26 19:08:19.242937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.818 "name": "Existed_Raid", 00:18:52.818 "uuid": "8b77f911-14f3-41ba-b91a-6e6ce9d08c3a", 00:18:52.818 "strip_size_kb": 0, 00:18:52.818 "state": "configuring", 00:18:52.818 "raid_level": "raid1", 00:18:52.818 "superblock": true, 00:18:52.818 "num_base_bdevs": 2, 00:18:52.818 "num_base_bdevs_discovered": 1, 00:18:52.818 "num_base_bdevs_operational": 2, 00:18:52.818 "base_bdevs_list": [ 00:18:52.818 { 00:18:52.818 "name": "BaseBdev1", 00:18:52.818 "uuid": "ec594197-3ae3-46c5-9783-4e8381a3f4c8", 00:18:52.818 "is_configured": true, 00:18:52.818 "data_offset": 256, 00:18:52.818 "data_size": 7936 00:18:52.818 }, 00:18:52.818 { 00:18:52.818 "name": "BaseBdev2", 00:18:52.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.818 "is_configured": false, 00:18:52.818 "data_offset": 0, 00:18:52.818 "data_size": 0 00:18:52.818 } 00:18:52.818 ] 00:18:52.818 }' 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.818 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.388 19:08:19 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.388 [2024-11-26 19:08:19.790536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:53.388 [2024-11-26 19:08:19.790912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:53.388 [2024-11-26 19:08:19.790933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:53.388 BaseBdev2 00:18:53.388 [2024-11-26 19:08:19.791370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:53.388 [2024-11-26 19:08:19.791593] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:53.388 [2024-11-26 19:08:19.791624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:53.388 [2024-11-26 19:08:19.791810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:53.388 19:08:19 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.388 [ 00:18:53.388 { 00:18:53.388 "name": "BaseBdev2", 00:18:53.388 "aliases": [ 00:18:53.388 "ca776483-0b06-4c7b-a675-5212a963009a" 00:18:53.388 ], 00:18:53.388 "product_name": "Malloc disk", 00:18:53.388 "block_size": 4096, 00:18:53.388 "num_blocks": 8192, 00:18:53.388 "uuid": "ca776483-0b06-4c7b-a675-5212a963009a", 00:18:53.388 "assigned_rate_limits": { 00:18:53.388 "rw_ios_per_sec": 0, 00:18:53.388 "rw_mbytes_per_sec": 0, 00:18:53.388 "r_mbytes_per_sec": 0, 00:18:53.388 "w_mbytes_per_sec": 0 00:18:53.388 }, 00:18:53.388 "claimed": true, 00:18:53.388 "claim_type": "exclusive_write", 00:18:53.388 "zoned": false, 00:18:53.388 "supported_io_types": { 00:18:53.388 "read": true, 00:18:53.388 "write": true, 00:18:53.388 "unmap": true, 00:18:53.388 "flush": true, 00:18:53.388 "reset": true, 00:18:53.388 "nvme_admin": false, 00:18:53.388 "nvme_io": false, 00:18:53.388 "nvme_io_md": false, 00:18:53.388 "write_zeroes": true, 00:18:53.388 "zcopy": true, 00:18:53.388 "get_zone_info": false, 00:18:53.388 "zone_management": false, 00:18:53.388 "zone_append": false, 00:18:53.388 "compare": false, 00:18:53.388 "compare_and_write": false, 00:18:53.388 "abort": true, 00:18:53.388 "seek_hole": false, 00:18:53.388 "seek_data": false, 00:18:53.388 "copy": true, 00:18:53.388 "nvme_iov_md": false 
00:18:53.388 }, 00:18:53.388 "memory_domains": [ 00:18:53.388 { 00:18:53.388 "dma_device_id": "system", 00:18:53.388 "dma_device_type": 1 00:18:53.388 }, 00:18:53.388 { 00:18:53.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.388 "dma_device_type": 2 00:18:53.388 } 00:18:53.388 ], 00:18:53.388 "driver_specific": {} 00:18:53.388 } 00:18:53.388 ] 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:53.388 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.389 "name": "Existed_Raid", 00:18:53.389 "uuid": "8b77f911-14f3-41ba-b91a-6e6ce9d08c3a", 00:18:53.389 "strip_size_kb": 0, 00:18:53.389 "state": "online", 00:18:53.389 "raid_level": "raid1", 00:18:53.389 "superblock": true, 00:18:53.389 "num_base_bdevs": 2, 00:18:53.389 "num_base_bdevs_discovered": 2, 00:18:53.389 "num_base_bdevs_operational": 2, 00:18:53.389 "base_bdevs_list": [ 00:18:53.389 { 00:18:53.389 "name": "BaseBdev1", 00:18:53.389 "uuid": "ec594197-3ae3-46c5-9783-4e8381a3f4c8", 00:18:53.389 "is_configured": true, 00:18:53.389 "data_offset": 256, 00:18:53.389 "data_size": 7936 00:18:53.389 }, 00:18:53.389 { 00:18:53.389 "name": "BaseBdev2", 00:18:53.389 "uuid": "ca776483-0b06-4c7b-a675-5212a963009a", 00:18:53.389 "is_configured": true, 00:18:53.389 "data_offset": 256, 00:18:53.389 "data_size": 7936 00:18:53.389 } 00:18:53.389 ] 00:18:53.389 }' 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.389 19:08:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:53.956 19:08:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.956 [2024-11-26 19:08:20.287074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:53.956 "name": "Existed_Raid", 00:18:53.956 "aliases": [ 00:18:53.956 "8b77f911-14f3-41ba-b91a-6e6ce9d08c3a" 00:18:53.956 ], 00:18:53.956 "product_name": "Raid Volume", 00:18:53.956 "block_size": 4096, 00:18:53.956 "num_blocks": 7936, 00:18:53.956 "uuid": "8b77f911-14f3-41ba-b91a-6e6ce9d08c3a", 00:18:53.956 "assigned_rate_limits": { 00:18:53.956 "rw_ios_per_sec": 0, 00:18:53.956 "rw_mbytes_per_sec": 0, 00:18:53.956 "r_mbytes_per_sec": 0, 00:18:53.956 "w_mbytes_per_sec": 0 00:18:53.956 }, 00:18:53.956 "claimed": false, 00:18:53.956 "zoned": false, 00:18:53.956 "supported_io_types": { 00:18:53.956 "read": true, 
00:18:53.956 "write": true, 00:18:53.956 "unmap": false, 00:18:53.956 "flush": false, 00:18:53.956 "reset": true, 00:18:53.956 "nvme_admin": false, 00:18:53.956 "nvme_io": false, 00:18:53.956 "nvme_io_md": false, 00:18:53.956 "write_zeroes": true, 00:18:53.956 "zcopy": false, 00:18:53.956 "get_zone_info": false, 00:18:53.956 "zone_management": false, 00:18:53.956 "zone_append": false, 00:18:53.956 "compare": false, 00:18:53.956 "compare_and_write": false, 00:18:53.956 "abort": false, 00:18:53.956 "seek_hole": false, 00:18:53.956 "seek_data": false, 00:18:53.956 "copy": false, 00:18:53.956 "nvme_iov_md": false 00:18:53.956 }, 00:18:53.956 "memory_domains": [ 00:18:53.956 { 00:18:53.956 "dma_device_id": "system", 00:18:53.956 "dma_device_type": 1 00:18:53.956 }, 00:18:53.956 { 00:18:53.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.956 "dma_device_type": 2 00:18:53.956 }, 00:18:53.956 { 00:18:53.956 "dma_device_id": "system", 00:18:53.956 "dma_device_type": 1 00:18:53.956 }, 00:18:53.956 { 00:18:53.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.956 "dma_device_type": 2 00:18:53.956 } 00:18:53.956 ], 00:18:53.956 "driver_specific": { 00:18:53.956 "raid": { 00:18:53.956 "uuid": "8b77f911-14f3-41ba-b91a-6e6ce9d08c3a", 00:18:53.956 "strip_size_kb": 0, 00:18:53.956 "state": "online", 00:18:53.956 "raid_level": "raid1", 00:18:53.956 "superblock": true, 00:18:53.956 "num_base_bdevs": 2, 00:18:53.956 "num_base_bdevs_discovered": 2, 00:18:53.956 "num_base_bdevs_operational": 2, 00:18:53.956 "base_bdevs_list": [ 00:18:53.956 { 00:18:53.956 "name": "BaseBdev1", 00:18:53.956 "uuid": "ec594197-3ae3-46c5-9783-4e8381a3f4c8", 00:18:53.956 "is_configured": true, 00:18:53.956 "data_offset": 256, 00:18:53.956 "data_size": 7936 00:18:53.956 }, 00:18:53.956 { 00:18:53.956 "name": "BaseBdev2", 00:18:53.956 "uuid": "ca776483-0b06-4c7b-a675-5212a963009a", 00:18:53.956 "is_configured": true, 00:18:53.956 "data_offset": 256, 00:18:53.956 "data_size": 7936 00:18:53.956 } 
00:18:53.956 ] 00:18:53.956 } 00:18:53.956 } 00:18:53.956 }' 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:53.956 BaseBdev2' 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:53.956 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:53.957 19:08:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.957 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:53.957 [2024-11-26 19:08:20.526847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:54.215 19:08:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.215 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.216 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.216 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.216 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.216 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:54.216 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.216 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.216 "name": "Existed_Raid", 00:18:54.216 "uuid": "8b77f911-14f3-41ba-b91a-6e6ce9d08c3a", 00:18:54.216 "strip_size_kb": 0, 00:18:54.216 "state": "online", 00:18:54.216 "raid_level": "raid1", 00:18:54.216 "superblock": true, 00:18:54.216 
"num_base_bdevs": 2, 00:18:54.216 "num_base_bdevs_discovered": 1, 00:18:54.216 "num_base_bdevs_operational": 1, 00:18:54.216 "base_bdevs_list": [ 00:18:54.216 { 00:18:54.216 "name": null, 00:18:54.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.216 "is_configured": false, 00:18:54.216 "data_offset": 0, 00:18:54.216 "data_size": 7936 00:18:54.216 }, 00:18:54.216 { 00:18:54.216 "name": "BaseBdev2", 00:18:54.216 "uuid": "ca776483-0b06-4c7b-a675-5212a963009a", 00:18:54.216 "is_configured": true, 00:18:54.216 "data_offset": 256, 00:18:54.216 "data_size": 7936 00:18:54.216 } 00:18:54.216 ] 00:18:54.216 }' 00:18:54.216 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.216 19:08:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:54.474 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:54.474 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:54.474 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.474 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.474 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:54.474 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:54.474 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:54.733 [2024-11-26 19:08:21.120110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:54.733 [2024-11-26 19:08:21.120261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.733 [2024-11-26 19:08:21.214371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.733 [2024-11-26 19:08:21.214701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.733 [2024-11-26 19:08:21.214868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:54.733 19:08:21 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86872 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86872 ']' 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86872 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86872 00:18:54.733 killing process with pid 86872 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86872' 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86872 00:18:54.733 [2024-11-26 19:08:21.302588] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:54.733 19:08:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86872 00:18:54.733 [2024-11-26 19:08:21.318095] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:56.110 ************************************ 00:18:56.110 END TEST raid_state_function_test_sb_4k 00:18:56.110 ************************************ 00:18:56.110 19:08:22 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@328 -- # return 0 00:18:56.110 00:18:56.110 real 0m5.490s 00:18:56.110 user 0m8.081s 00:18:56.110 sys 0m0.871s 00:18:56.110 19:08:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.110 19:08:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.110 19:08:22 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:56.110 19:08:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:56.110 19:08:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.110 19:08:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.110 ************************************ 00:18:56.110 START TEST raid_superblock_test_4k 00:18:56.110 ************************************ 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:56.110 
19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=87124 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:56.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 87124 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 87124 ']' 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.110 19:08:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.110 [2024-11-26 19:08:22.658737] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:18:56.110 [2024-11-26 19:08:22.658938] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87124 ] 00:18:56.368 [2024-11-26 19:08:22.833282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.368 [2024-11-26 19:08:22.978795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.627 [2024-11-26 19:08:23.201200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.627 [2024-11-26 19:08:23.201270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.195 malloc1 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.195 [2024-11-26 19:08:23.660733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:57.195 [2024-11-26 19:08:23.660820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.195 [2024-11-26 19:08:23.660856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:57.195 [2024-11-26 19:08:23.660872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.195 [2024-11-26 19:08:23.663865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.195 [2024-11-26 19:08:23.664048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:57.195 pt1 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:57.195 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.196 malloc2 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.196 [2024-11-26 19:08:23.720240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:57.196 [2024-11-26 19:08:23.720470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.196 [2024-11-26 19:08:23.720555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:57.196 [2024-11-26 19:08:23.720698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.196 [2024-11-26 19:08:23.723718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.196 [2024-11-26 
19:08:23.723866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:57.196 pt2 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.196 [2024-11-26 19:08:23.732427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:57.196 [2024-11-26 19:08:23.735087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:57.196 [2024-11-26 19:08:23.735509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:57.196 [2024-11-26 19:08:23.735539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:57.196 [2024-11-26 19:08:23.735925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:57.196 [2024-11-26 19:08:23.736154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:57.196 [2024-11-26 19:08:23.736180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:57.196 [2024-11-26 19:08:23.736483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.196 "name": "raid_bdev1", 00:18:57.196 "uuid": "63dd7aaf-9b6d-4b6b-bac8-8e998058135e", 00:18:57.196 "strip_size_kb": 0, 00:18:57.196 "state": "online", 00:18:57.196 "raid_level": "raid1", 00:18:57.196 "superblock": true, 00:18:57.196 "num_base_bdevs": 2, 00:18:57.196 
"num_base_bdevs_discovered": 2, 00:18:57.196 "num_base_bdevs_operational": 2, 00:18:57.196 "base_bdevs_list": [ 00:18:57.196 { 00:18:57.196 "name": "pt1", 00:18:57.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:57.196 "is_configured": true, 00:18:57.196 "data_offset": 256, 00:18:57.196 "data_size": 7936 00:18:57.196 }, 00:18:57.196 { 00:18:57.196 "name": "pt2", 00:18:57.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.196 "is_configured": true, 00:18:57.196 "data_offset": 256, 00:18:57.196 "data_size": 7936 00:18:57.196 } 00:18:57.196 ] 00:18:57.196 }' 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.196 19:08:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:57.764 [2024-11-26 19:08:24.268933] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:57.764 "name": "raid_bdev1", 00:18:57.764 "aliases": [ 00:18:57.764 "63dd7aaf-9b6d-4b6b-bac8-8e998058135e" 00:18:57.764 ], 00:18:57.764 "product_name": "Raid Volume", 00:18:57.764 "block_size": 4096, 00:18:57.764 "num_blocks": 7936, 00:18:57.764 "uuid": "63dd7aaf-9b6d-4b6b-bac8-8e998058135e", 00:18:57.764 "assigned_rate_limits": { 00:18:57.764 "rw_ios_per_sec": 0, 00:18:57.764 "rw_mbytes_per_sec": 0, 00:18:57.764 "r_mbytes_per_sec": 0, 00:18:57.764 "w_mbytes_per_sec": 0 00:18:57.764 }, 00:18:57.764 "claimed": false, 00:18:57.764 "zoned": false, 00:18:57.764 "supported_io_types": { 00:18:57.764 "read": true, 00:18:57.764 "write": true, 00:18:57.764 "unmap": false, 00:18:57.764 "flush": false, 00:18:57.764 "reset": true, 00:18:57.764 "nvme_admin": false, 00:18:57.764 "nvme_io": false, 00:18:57.764 "nvme_io_md": false, 00:18:57.764 "write_zeroes": true, 00:18:57.764 "zcopy": false, 00:18:57.764 "get_zone_info": false, 00:18:57.764 "zone_management": false, 00:18:57.764 "zone_append": false, 00:18:57.764 "compare": false, 00:18:57.764 "compare_and_write": false, 00:18:57.764 "abort": false, 00:18:57.764 "seek_hole": false, 00:18:57.764 "seek_data": false, 00:18:57.764 "copy": false, 00:18:57.764 "nvme_iov_md": false 00:18:57.764 }, 00:18:57.764 "memory_domains": [ 00:18:57.764 { 00:18:57.764 "dma_device_id": "system", 00:18:57.764 "dma_device_type": 1 00:18:57.764 }, 00:18:57.764 { 00:18:57.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.764 "dma_device_type": 2 00:18:57.764 }, 00:18:57.764 { 00:18:57.764 "dma_device_id": "system", 00:18:57.764 "dma_device_type": 1 00:18:57.764 }, 00:18:57.764 { 00:18:57.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.764 "dma_device_type": 2 00:18:57.764 } 00:18:57.764 ], 
00:18:57.764 "driver_specific": { 00:18:57.764 "raid": { 00:18:57.764 "uuid": "63dd7aaf-9b6d-4b6b-bac8-8e998058135e", 00:18:57.764 "strip_size_kb": 0, 00:18:57.764 "state": "online", 00:18:57.764 "raid_level": "raid1", 00:18:57.764 "superblock": true, 00:18:57.764 "num_base_bdevs": 2, 00:18:57.764 "num_base_bdevs_discovered": 2, 00:18:57.764 "num_base_bdevs_operational": 2, 00:18:57.764 "base_bdevs_list": [ 00:18:57.764 { 00:18:57.764 "name": "pt1", 00:18:57.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:57.764 "is_configured": true, 00:18:57.764 "data_offset": 256, 00:18:57.764 "data_size": 7936 00:18:57.764 }, 00:18:57.764 { 00:18:57.764 "name": "pt2", 00:18:57.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.764 "is_configured": true, 00:18:57.764 "data_offset": 256, 00:18:57.764 "data_size": 7936 00:18:57.764 } 00:18:57.764 ] 00:18:57.764 } 00:18:57.764 } 00:18:57.764 }' 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:57.764 pt2' 00:18:57.764 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.023 [2024-11-26 19:08:24.545055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=63dd7aaf-9b6d-4b6b-bac8-8e998058135e 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 63dd7aaf-9b6d-4b6b-bac8-8e998058135e ']' 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.023 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.023 [2024-11-26 19:08:24.604644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.023 [2024-11-26 19:08:24.604942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.023 [2024-11-26 19:08:24.605099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.024 [2024-11-26 19:08:24.605192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.024 [2024-11-26 19:08:24.605215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:58.024 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.024 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.024 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.024 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:58.024 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.024 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.283 [2024-11-26 19:08:24.736728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:58.283 [2024-11-26 19:08:24.739548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:58.283 [2024-11-26 19:08:24.739831] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:58.283 [2024-11-26 19:08:24.739934] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:58.283 [2024-11-26 19:08:24.739963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.283 [2024-11-26 19:08:24.739980] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:58.283 request: 00:18:58.283 { 00:18:58.283 "name": "raid_bdev1", 00:18:58.283 "raid_level": "raid1", 00:18:58.283 "base_bdevs": [ 00:18:58.283 "malloc1", 00:18:58.283 "malloc2" 00:18:58.283 ], 00:18:58.283 "superblock": false, 00:18:58.283 "method": "bdev_raid_create", 00:18:58.283 "req_id": 1 00:18:58.283 } 00:18:58.283 Got JSON-RPC error response 00:18:58.283 response: 00:18:58.283 { 00:18:58.283 "code": -17, 00:18:58.283 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:58.283 } 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.283 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.283 [2024-11-26 19:08:24.804812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:58.283 [2024-11-26 19:08:24.804940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.283 [2024-11-26 19:08:24.804975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:58.284 [2024-11-26 19:08:24.804994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.284 [2024-11-26 19:08:24.808002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.284 [2024-11-26 19:08:24.808053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:58.284 [2024-11-26 19:08:24.808174] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:58.284 [2024-11-26 19:08:24.808256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:58.284 pt1 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.284 "name": "raid_bdev1", 00:18:58.284 "uuid": "63dd7aaf-9b6d-4b6b-bac8-8e998058135e", 00:18:58.284 "strip_size_kb": 0, 00:18:58.284 "state": "configuring", 00:18:58.284 "raid_level": "raid1", 00:18:58.284 "superblock": true, 00:18:58.284 "num_base_bdevs": 2, 00:18:58.284 "num_base_bdevs_discovered": 1, 00:18:58.284 "num_base_bdevs_operational": 2, 00:18:58.284 "base_bdevs_list": [ 00:18:58.284 { 00:18:58.284 "name": "pt1", 00:18:58.284 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.284 "is_configured": true, 00:18:58.284 "data_offset": 256, 00:18:58.284 "data_size": 7936 00:18:58.284 }, 00:18:58.284 { 00:18:58.284 "name": null, 00:18:58.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.284 "is_configured": false, 00:18:58.284 "data_offset": 256, 00:18:58.284 "data_size": 7936 00:18:58.284 } 
00:18:58.284 ] 00:18:58.284 }' 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.284 19:08:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.850 [2024-11-26 19:08:25.336949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:58.850 [2024-11-26 19:08:25.337250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.850 [2024-11-26 19:08:25.337485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:58.850 [2024-11-26 19:08:25.337540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.850 [2024-11-26 19:08:25.338461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.850 [2024-11-26 19:08:25.338522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:58.850 [2024-11-26 19:08:25.338706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:58.850 [2024-11-26 19:08:25.338777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:58.850 [2024-11-26 19:08:25.339043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:18:58.850 [2024-11-26 19:08:25.339083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:58.850 [2024-11-26 19:08:25.339528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:58.850 [2024-11-26 19:08:25.339756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:58.850 [2024-11-26 19:08:25.339773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:58.850 [2024-11-26 19:08:25.339971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.850 pt2 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.850 19:08:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.851 19:08:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.851 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.851 "name": "raid_bdev1", 00:18:58.851 "uuid": "63dd7aaf-9b6d-4b6b-bac8-8e998058135e", 00:18:58.851 "strip_size_kb": 0, 00:18:58.851 "state": "online", 00:18:58.851 "raid_level": "raid1", 00:18:58.851 "superblock": true, 00:18:58.851 "num_base_bdevs": 2, 00:18:58.851 "num_base_bdevs_discovered": 2, 00:18:58.851 "num_base_bdevs_operational": 2, 00:18:58.851 "base_bdevs_list": [ 00:18:58.851 { 00:18:58.851 "name": "pt1", 00:18:58.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:58.851 "is_configured": true, 00:18:58.851 "data_offset": 256, 00:18:58.851 "data_size": 7936 00:18:58.851 }, 00:18:58.851 { 00:18:58.851 "name": "pt2", 00:18:58.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:58.851 "is_configured": true, 00:18:58.851 "data_offset": 256, 00:18:58.851 "data_size": 7936 00:18:58.851 } 00:18:58.851 ] 00:18:58.851 }' 00:18:58.851 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.851 19:08:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:59.418 [2024-11-26 19:08:25.869406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.418 "name": "raid_bdev1", 00:18:59.418 "aliases": [ 00:18:59.418 "63dd7aaf-9b6d-4b6b-bac8-8e998058135e" 00:18:59.418 ], 00:18:59.418 "product_name": "Raid Volume", 00:18:59.418 "block_size": 4096, 00:18:59.418 "num_blocks": 7936, 00:18:59.418 "uuid": "63dd7aaf-9b6d-4b6b-bac8-8e998058135e", 00:18:59.418 "assigned_rate_limits": { 00:18:59.418 "rw_ios_per_sec": 0, 00:18:59.418 "rw_mbytes_per_sec": 0, 00:18:59.418 "r_mbytes_per_sec": 0, 00:18:59.418 "w_mbytes_per_sec": 0 00:18:59.418 }, 00:18:59.418 "claimed": false, 00:18:59.418 "zoned": false, 00:18:59.418 "supported_io_types": { 00:18:59.418 "read": true, 00:18:59.418 "write": true, 00:18:59.418 "unmap": false, 
00:18:59.418 "flush": false, 00:18:59.418 "reset": true, 00:18:59.418 "nvme_admin": false, 00:18:59.418 "nvme_io": false, 00:18:59.418 "nvme_io_md": false, 00:18:59.418 "write_zeroes": true, 00:18:59.418 "zcopy": false, 00:18:59.418 "get_zone_info": false, 00:18:59.418 "zone_management": false, 00:18:59.418 "zone_append": false, 00:18:59.418 "compare": false, 00:18:59.418 "compare_and_write": false, 00:18:59.418 "abort": false, 00:18:59.418 "seek_hole": false, 00:18:59.418 "seek_data": false, 00:18:59.418 "copy": false, 00:18:59.418 "nvme_iov_md": false 00:18:59.418 }, 00:18:59.418 "memory_domains": [ 00:18:59.418 { 00:18:59.418 "dma_device_id": "system", 00:18:59.418 "dma_device_type": 1 00:18:59.418 }, 00:18:59.418 { 00:18:59.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.418 "dma_device_type": 2 00:18:59.418 }, 00:18:59.418 { 00:18:59.418 "dma_device_id": "system", 00:18:59.418 "dma_device_type": 1 00:18:59.418 }, 00:18:59.418 { 00:18:59.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.418 "dma_device_type": 2 00:18:59.418 } 00:18:59.418 ], 00:18:59.418 "driver_specific": { 00:18:59.418 "raid": { 00:18:59.418 "uuid": "63dd7aaf-9b6d-4b6b-bac8-8e998058135e", 00:18:59.418 "strip_size_kb": 0, 00:18:59.418 "state": "online", 00:18:59.418 "raid_level": "raid1", 00:18:59.418 "superblock": true, 00:18:59.418 "num_base_bdevs": 2, 00:18:59.418 "num_base_bdevs_discovered": 2, 00:18:59.418 "num_base_bdevs_operational": 2, 00:18:59.418 "base_bdevs_list": [ 00:18:59.418 { 00:18:59.418 "name": "pt1", 00:18:59.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:59.418 "is_configured": true, 00:18:59.418 "data_offset": 256, 00:18:59.418 "data_size": 7936 00:18:59.418 }, 00:18:59.418 { 00:18:59.418 "name": "pt2", 00:18:59.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.418 "is_configured": true, 00:18:59.418 "data_offset": 256, 00:18:59.418 "data_size": 7936 00:18:59.418 } 00:18:59.418 ] 00:18:59.418 } 00:18:59.418 } 00:18:59.418 }' 00:18:59.418 
19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:59.418 pt2' 00:18:59.418 19:08:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.418 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:59.418 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.418 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:59.418 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.418 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.418 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.418 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.676 
19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.676 [2024-11-26 19:08:26.129531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 63dd7aaf-9b6d-4b6b-bac8-8e998058135e '!=' 63dd7aaf-9b6d-4b6b-bac8-8e998058135e ']' 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.676 [2024-11-26 19:08:26.181246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:59.676 
19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.676 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.677 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.677 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.677 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.677 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.677 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.677 "name": "raid_bdev1", 00:18:59.677 "uuid": "63dd7aaf-9b6d-4b6b-bac8-8e998058135e", 
00:18:59.677 "strip_size_kb": 0, 00:18:59.677 "state": "online", 00:18:59.677 "raid_level": "raid1", 00:18:59.677 "superblock": true, 00:18:59.677 "num_base_bdevs": 2, 00:18:59.677 "num_base_bdevs_discovered": 1, 00:18:59.677 "num_base_bdevs_operational": 1, 00:18:59.677 "base_bdevs_list": [ 00:18:59.677 { 00:18:59.677 "name": null, 00:18:59.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.677 "is_configured": false, 00:18:59.677 "data_offset": 0, 00:18:59.677 "data_size": 7936 00:18:59.677 }, 00:18:59.677 { 00:18:59.677 "name": "pt2", 00:18:59.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:59.677 "is_configured": true, 00:18:59.677 "data_offset": 256, 00:18:59.677 "data_size": 7936 00:18:59.677 } 00:18:59.677 ] 00:18:59.677 }' 00:18:59.677 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.677 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.244 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:00.244 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.244 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.244 [2024-11-26 19:08:26.653317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.244 [2024-11-26 19:08:26.653355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.244 [2024-11-26 19:08:26.653477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.244 [2024-11-26 19:08:26.653551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.244 [2024-11-26 19:08:26.653571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:00.244 19:08:26 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.244 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.244 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:19:00.245 19:08:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.245 [2024-11-26 19:08:26.721298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:00.245 [2024-11-26 19:08:26.721392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.245 [2024-11-26 19:08:26.721420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:00.245 [2024-11-26 19:08:26.721438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.245 [2024-11-26 19:08:26.724606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.245 [2024-11-26 19:08:26.724660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:00.245 [2024-11-26 19:08:26.724782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:00.245 [2024-11-26 19:08:26.724869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:00.245 [2024-11-26 19:08:26.725021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:00.245 [2024-11-26 19:08:26.725044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:00.245 [2024-11-26 19:08:26.725381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:00.245 [2024-11-26 19:08:26.725592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:00.245 [2024-11-26 19:08:26.725608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:19:00.245 pt2 00:19:00.245 [2024-11-26 19:08:26.725855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.245 "name": "raid_bdev1", 00:19:00.245 "uuid": "63dd7aaf-9b6d-4b6b-bac8-8e998058135e", 00:19:00.245 "strip_size_kb": 0, 00:19:00.245 "state": "online", 00:19:00.245 "raid_level": "raid1", 00:19:00.245 "superblock": true, 00:19:00.245 "num_base_bdevs": 2, 00:19:00.245 "num_base_bdevs_discovered": 1, 00:19:00.245 "num_base_bdevs_operational": 1, 00:19:00.245 "base_bdevs_list": [ 00:19:00.245 { 00:19:00.245 "name": null, 00:19:00.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.245 "is_configured": false, 00:19:00.245 "data_offset": 256, 00:19:00.245 "data_size": 7936 00:19:00.245 }, 00:19:00.245 { 00:19:00.245 "name": "pt2", 00:19:00.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.245 "is_configured": true, 00:19:00.245 "data_offset": 256, 00:19:00.245 "data_size": 7936 00:19:00.245 } 00:19:00.245 ] 00:19:00.245 }' 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.245 19:08:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.812 [2024-11-26 19:08:27.221908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.812 [2024-11-26 19:08:27.221951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.812 [2024-11-26 19:08:27.222069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.812 [2024-11-26 19:08:27.222148] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.812 [2024-11-26 19:08:27.222165] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.812 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.813 [2024-11-26 19:08:27.285952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:00.813 [2024-11-26 19:08:27.286040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.813 [2024-11-26 19:08:27.286076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:00.813 [2024-11-26 19:08:27.286092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.813 [2024-11-26 19:08:27.289329] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.813 [2024-11-26 19:08:27.289377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:00.813 [2024-11-26 19:08:27.289506] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:00.813 [2024-11-26 19:08:27.289574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:00.813 [2024-11-26 19:08:27.289790] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:00.813 [2024-11-26 19:08:27.289811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.813 [2024-11-26 19:08:27.289834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:00.813 [2024-11-26 19:08:27.289907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:00.813 [2024-11-26 19:08:27.290028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:00.813 [2024-11-26 19:08:27.290044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:00.813 [2024-11-26 19:08:27.290403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:00.813 [2024-11-26 19:08:27.290603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:00.813 [2024-11-26 19:08:27.290625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:00.813 pt1 00:19:00.813 [2024-11-26 19:08:27.290876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.813 "name": "raid_bdev1", 00:19:00.813 "uuid": "63dd7aaf-9b6d-4b6b-bac8-8e998058135e", 00:19:00.813 "strip_size_kb": 0, 00:19:00.813 "state": "online", 00:19:00.813 "raid_level": "raid1", 
00:19:00.813 "superblock": true, 00:19:00.813 "num_base_bdevs": 2, 00:19:00.813 "num_base_bdevs_discovered": 1, 00:19:00.813 "num_base_bdevs_operational": 1, 00:19:00.813 "base_bdevs_list": [ 00:19:00.813 { 00:19:00.813 "name": null, 00:19:00.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.813 "is_configured": false, 00:19:00.813 "data_offset": 256, 00:19:00.813 "data_size": 7936 00:19:00.813 }, 00:19:00.813 { 00:19:00.813 "name": "pt2", 00:19:00.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:00.813 "is_configured": true, 00:19:00.813 "data_offset": 256, 00:19:00.813 "data_size": 7936 00:19:00.813 } 00:19:00.813 ] 00:19:00.813 }' 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.813 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.380 
[2024-11-26 19:08:27.858613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 63dd7aaf-9b6d-4b6b-bac8-8e998058135e '!=' 63dd7aaf-9b6d-4b6b-bac8-8e998058135e ']' 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 87124 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 87124 ']' 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 87124 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87124 00:19:01.380 killing process with pid 87124 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87124' 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 87124 00:19:01.380 [2024-11-26 19:08:27.939983] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.380 19:08:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 87124 00:19:01.380 [2024-11-26 19:08:27.940113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.380 [2024-11-26 19:08:27.940194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:19:01.380 [2024-11-26 19:08:27.940219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:01.638 [2024-11-26 19:08:28.144672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:03.015 ************************************ 00:19:03.015 END TEST raid_superblock_test_4k 00:19:03.015 ************************************ 00:19:03.015 19:08:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:03.015 00:19:03.015 real 0m6.777s 00:19:03.015 user 0m10.521s 00:19:03.015 sys 0m1.067s 00:19:03.015 19:08:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.015 19:08:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.015 19:08:29 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:19:03.015 19:08:29 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:03.015 19:08:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:03.015 19:08:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.015 19:08:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.015 ************************************ 00:19:03.015 START TEST raid_rebuild_test_sb_4k 00:19:03.015 ************************************ 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:03.015 19:08:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:03.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87453 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87453 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87453 ']' 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.015 19:08:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.015 [2024-11-26 19:08:29.490237] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:19:03.015 [2024-11-26 19:08:29.490671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87453 ] 00:19:03.015 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:19:03.015 Zero copy mechanism will not be used. 00:19:03.273 [2024-11-26 19:08:29.683544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.273 [2024-11-26 19:08:29.830522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.532 [2024-11-26 19:08:30.055789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.532 [2024-11-26 19:08:30.055857] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.100 BaseBdev1_malloc 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.100 [2024-11-26 19:08:30.537700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:04.100 [2024-11-26 19:08:30.537783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.100 [2024-11-26 19:08:30.537818] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:19:04.100 [2024-11-26 19:08:30.537839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.100 [2024-11-26 19:08:30.540909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.100 [2024-11-26 19:08:30.540957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:04.100 BaseBdev1 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.100 BaseBdev2_malloc 00:19:04.100 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.101 [2024-11-26 19:08:30.598229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:04.101 [2024-11-26 19:08:30.598334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.101 [2024-11-26 19:08:30.598375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:04.101 [2024-11-26 19:08:30.598395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:19:04.101 [2024-11-26 19:08:30.601430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.101 [2024-11-26 19:08:30.601477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:04.101 BaseBdev2 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.101 spare_malloc 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.101 spare_delay 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.101 [2024-11-26 19:08:30.683687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:04.101 [2024-11-26 19:08:30.683765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.101 [2024-11-26 19:08:30.683798] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:04.101 [2024-11-26 19:08:30.683817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.101 [2024-11-26 19:08:30.686842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.101 [2024-11-26 19:08:30.686890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:04.101 spare 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.101 [2024-11-26 19:08:30.691843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:04.101 [2024-11-26 19:08:30.694464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:04.101 [2024-11-26 19:08:30.694728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:04.101 [2024-11-26 19:08:30.694753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:04.101 [2024-11-26 19:08:30.695126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:04.101 [2024-11-26 19:08:30.695401] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:04.101 [2024-11-26 19:08:30.695420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:04.101 [2024-11-26 19:08:30.695638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.101 
19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.101 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.360 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.360 "name": "raid_bdev1", 00:19:04.360 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 
00:19:04.360 "strip_size_kb": 0, 00:19:04.360 "state": "online", 00:19:04.360 "raid_level": "raid1", 00:19:04.360 "superblock": true, 00:19:04.360 "num_base_bdevs": 2, 00:19:04.360 "num_base_bdevs_discovered": 2, 00:19:04.360 "num_base_bdevs_operational": 2, 00:19:04.360 "base_bdevs_list": [ 00:19:04.360 { 00:19:04.360 "name": "BaseBdev1", 00:19:04.360 "uuid": "31ce262c-039c-5564-abc0-c4a307c741aa", 00:19:04.360 "is_configured": true, 00:19:04.360 "data_offset": 256, 00:19:04.360 "data_size": 7936 00:19:04.360 }, 00:19:04.360 { 00:19:04.360 "name": "BaseBdev2", 00:19:04.360 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:04.360 "is_configured": true, 00:19:04.360 "data_offset": 256, 00:19:04.360 "data_size": 7936 00:19:04.360 } 00:19:04.360 ] 00:19:04.360 }' 00:19:04.360 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.360 19:08:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.927 [2024-11-26 19:08:31.248392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.927 19:08:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.927 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:05.185 [2024-11-26 19:08:31.624219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:19:05.185 /dev/nbd0 00:19:05.185 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:05.185 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:05.185 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:05.185 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:05.185 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:05.186 1+0 records in 00:19:05.186 1+0 records out 00:19:05.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375215 s, 10.9 MB/s 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:05.186 19:08:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:05.186 19:08:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:06.123 7936+0 records in 00:19:06.123 7936+0 records out 00:19:06.123 32505856 bytes (33 MB, 31 MiB) copied, 0.930034 s, 35.0 MB/s 00:19:06.123 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:06.123 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:06.123 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:06.123 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:06.123 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:06.123 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:06.123 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:06.382 [2024-11-26 19:08:32.895183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.382 [2024-11-26 19:08:32.907623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.382 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.382 "name": "raid_bdev1", 00:19:06.382 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:06.382 "strip_size_kb": 0, 00:19:06.382 "state": "online", 00:19:06.382 "raid_level": "raid1", 00:19:06.382 "superblock": true, 00:19:06.382 "num_base_bdevs": 2, 00:19:06.382 "num_base_bdevs_discovered": 1, 00:19:06.382 "num_base_bdevs_operational": 1, 00:19:06.382 "base_bdevs_list": [ 00:19:06.382 { 00:19:06.382 "name": null, 00:19:06.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.382 "is_configured": false, 00:19:06.382 "data_offset": 0, 00:19:06.383 "data_size": 7936 00:19:06.383 }, 00:19:06.383 { 00:19:06.383 "name": "BaseBdev2", 00:19:06.383 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:06.383 "is_configured": true, 00:19:06.383 "data_offset": 256, 00:19:06.383 "data_size": 7936 00:19:06.383 } 00:19:06.383 ] 00:19:06.383 }' 00:19:06.383 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.383 19:08:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.950 19:08:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:06.950 19:08:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.950 19:08:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.950 [2024-11-26 19:08:33.471817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.950 [2024-11-26 19:08:33.489857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:06.950 19:08:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.950 19:08:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:06.950 [2024-11-26 19:08:33.492582] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:07.888 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.888 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.888 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.888 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.888 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.888 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.888 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.888 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.888 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.147 19:08:34 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.147 "name": "raid_bdev1", 00:19:08.147 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:08.147 "strip_size_kb": 0, 00:19:08.147 "state": "online", 00:19:08.147 "raid_level": "raid1", 00:19:08.147 "superblock": true, 00:19:08.147 "num_base_bdevs": 2, 00:19:08.147 "num_base_bdevs_discovered": 2, 00:19:08.147 "num_base_bdevs_operational": 2, 00:19:08.147 "process": { 00:19:08.147 "type": "rebuild", 00:19:08.147 "target": "spare", 00:19:08.147 "progress": { 00:19:08.147 "blocks": 2304, 00:19:08.147 "percent": 29 00:19:08.147 } 00:19:08.147 }, 00:19:08.147 "base_bdevs_list": [ 00:19:08.147 { 00:19:08.147 "name": "spare", 00:19:08.147 "uuid": "18914fb4-053a-588a-9858-ca6a139fb225", 00:19:08.147 "is_configured": true, 00:19:08.147 "data_offset": 256, 00:19:08.147 "data_size": 7936 00:19:08.147 }, 00:19:08.147 { 00:19:08.147 "name": "BaseBdev2", 00:19:08.147 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:08.147 "is_configured": true, 00:19:08.147 "data_offset": 256, 00:19:08.147 "data_size": 7936 00:19:08.147 } 00:19:08.147 ] 00:19:08.147 }' 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.147 19:08:34 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.147 [2024-11-26 19:08:34.654443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:08.147 [2024-11-26 19:08:34.704744] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:08.147 [2024-11-26 19:08:34.704902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.147 [2024-11-26 19:08:34.704938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:08.147 [2024-11-26 19:08:34.704954] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.147 19:08:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.147 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.406 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.406 "name": "raid_bdev1", 00:19:08.406 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:08.406 "strip_size_kb": 0, 00:19:08.406 "state": "online", 00:19:08.406 "raid_level": "raid1", 00:19:08.406 "superblock": true, 00:19:08.406 "num_base_bdevs": 2, 00:19:08.406 "num_base_bdevs_discovered": 1, 00:19:08.406 "num_base_bdevs_operational": 1, 00:19:08.406 "base_bdevs_list": [ 00:19:08.406 { 00:19:08.406 "name": null, 00:19:08.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.406 "is_configured": false, 00:19:08.406 "data_offset": 0, 00:19:08.406 "data_size": 7936 00:19:08.406 }, 00:19:08.406 { 00:19:08.406 "name": "BaseBdev2", 00:19:08.406 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:08.406 "is_configured": true, 00:19:08.406 "data_offset": 256, 00:19:08.406 "data_size": 7936 00:19:08.406 } 00:19:08.406 ] 00:19:08.406 }' 00:19:08.406 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.406 19:08:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.664 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.664 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.664 19:08:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.664 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.664 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.664 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.664 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.664 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.664 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.664 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.923 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.923 "name": "raid_bdev1", 00:19:08.923 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:08.923 "strip_size_kb": 0, 00:19:08.923 "state": "online", 00:19:08.923 "raid_level": "raid1", 00:19:08.923 "superblock": true, 00:19:08.923 "num_base_bdevs": 2, 00:19:08.923 "num_base_bdevs_discovered": 1, 00:19:08.923 "num_base_bdevs_operational": 1, 00:19:08.923 "base_bdevs_list": [ 00:19:08.923 { 00:19:08.923 "name": null, 00:19:08.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.923 "is_configured": false, 00:19:08.923 "data_offset": 0, 00:19:08.923 "data_size": 7936 00:19:08.923 }, 00:19:08.923 { 00:19:08.923 "name": "BaseBdev2", 00:19:08.923 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:08.923 "is_configured": true, 00:19:08.923 "data_offset": 256, 00:19:08.923 "data_size": 7936 00:19:08.923 } 00:19:08.923 ] 00:19:08.923 }' 00:19:08.923 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.923 19:08:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.923 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.923 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.923 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:08.923 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.923 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.923 [2024-11-26 19:08:35.435647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.923 [2024-11-26 19:08:35.452769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:08.923 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.923 19:08:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:08.923 [2024-11-26 19:08:35.455693] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:09.858 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.858 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.858 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.858 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.858 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.858 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.858 19:08:36 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.858 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.858 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.116 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.116 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.116 "name": "raid_bdev1", 00:19:10.117 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:10.117 "strip_size_kb": 0, 00:19:10.117 "state": "online", 00:19:10.117 "raid_level": "raid1", 00:19:10.117 "superblock": true, 00:19:10.117 "num_base_bdevs": 2, 00:19:10.117 "num_base_bdevs_discovered": 2, 00:19:10.117 "num_base_bdevs_operational": 2, 00:19:10.117 "process": { 00:19:10.117 "type": "rebuild", 00:19:10.117 "target": "spare", 00:19:10.117 "progress": { 00:19:10.117 "blocks": 2560, 00:19:10.117 "percent": 32 00:19:10.117 } 00:19:10.117 }, 00:19:10.117 "base_bdevs_list": [ 00:19:10.117 { 00:19:10.117 "name": "spare", 00:19:10.117 "uuid": "18914fb4-053a-588a-9858-ca6a139fb225", 00:19:10.117 "is_configured": true, 00:19:10.117 "data_offset": 256, 00:19:10.117 "data_size": 7936 00:19:10.117 }, 00:19:10.117 { 00:19:10.117 "name": "BaseBdev2", 00:19:10.117 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:10.117 "is_configured": true, 00:19:10.117 "data_offset": 256, 00:19:10.117 "data_size": 7936 00:19:10.117 } 00:19:10.117 ] 00:19:10.117 }' 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:10.117 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=754 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.117 19:08:36 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.117 "name": "raid_bdev1", 00:19:10.117 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:10.117 "strip_size_kb": 0, 00:19:10.117 "state": "online", 00:19:10.117 "raid_level": "raid1", 00:19:10.117 "superblock": true, 00:19:10.117 "num_base_bdevs": 2, 00:19:10.117 "num_base_bdevs_discovered": 2, 00:19:10.117 "num_base_bdevs_operational": 2, 00:19:10.117 "process": { 00:19:10.117 "type": "rebuild", 00:19:10.117 "target": "spare", 00:19:10.117 "progress": { 00:19:10.117 "blocks": 2816, 00:19:10.117 "percent": 35 00:19:10.117 } 00:19:10.117 }, 00:19:10.117 "base_bdevs_list": [ 00:19:10.117 { 00:19:10.117 "name": "spare", 00:19:10.117 "uuid": "18914fb4-053a-588a-9858-ca6a139fb225", 00:19:10.117 "is_configured": true, 00:19:10.117 "data_offset": 256, 00:19:10.117 "data_size": 7936 00:19:10.117 }, 00:19:10.117 { 00:19:10.117 "name": "BaseBdev2", 00:19:10.117 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:10.117 "is_configured": true, 00:19:10.117 "data_offset": 256, 00:19:10.117 "data_size": 7936 00:19:10.117 } 00:19:10.117 ] 00:19:10.117 }' 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:10.117 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.376 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:10.376 19:08:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.311 "name": "raid_bdev1", 00:19:11.311 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:11.311 "strip_size_kb": 0, 00:19:11.311 "state": "online", 00:19:11.311 "raid_level": "raid1", 00:19:11.311 "superblock": true, 00:19:11.311 "num_base_bdevs": 2, 00:19:11.311 "num_base_bdevs_discovered": 2, 00:19:11.311 "num_base_bdevs_operational": 2, 00:19:11.311 "process": { 00:19:11.311 "type": "rebuild", 00:19:11.311 "target": "spare", 00:19:11.311 "progress": { 00:19:11.311 "blocks": 5888, 00:19:11.311 "percent": 74 00:19:11.311 } 00:19:11.311 }, 00:19:11.311 "base_bdevs_list": [ 00:19:11.311 { 00:19:11.311 "name": "spare", 00:19:11.311 "uuid": "18914fb4-053a-588a-9858-ca6a139fb225", 00:19:11.311 "is_configured": true, 00:19:11.311 "data_offset": 256, 00:19:11.311 "data_size": 7936 00:19:11.311 
}, 00:19:11.311 { 00:19:11.311 "name": "BaseBdev2", 00:19:11.311 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:11.311 "is_configured": true, 00:19:11.311 "data_offset": 256, 00:19:11.311 "data_size": 7936 00:19:11.311 } 00:19:11.311 ] 00:19:11.311 }' 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.311 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.569 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.569 19:08:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:12.141 [2024-11-26 19:08:38.587460] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:12.141 [2024-11-26 19:08:38.587644] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:12.141 [2024-11-26 19:08:38.587950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.399 19:08:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.399 19:08:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.399 19:08:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.399 19:08:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.399 19:08:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.399 19:08:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.399 19:08:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:12.399 19:08:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.399 19:08:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.399 19:08:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.399 19:08:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.399 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.399 "name": "raid_bdev1", 00:19:12.399 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:12.399 "strip_size_kb": 0, 00:19:12.399 "state": "online", 00:19:12.399 "raid_level": "raid1", 00:19:12.399 "superblock": true, 00:19:12.399 "num_base_bdevs": 2, 00:19:12.399 "num_base_bdevs_discovered": 2, 00:19:12.399 "num_base_bdevs_operational": 2, 00:19:12.399 "base_bdevs_list": [ 00:19:12.399 { 00:19:12.399 "name": "spare", 00:19:12.399 "uuid": "18914fb4-053a-588a-9858-ca6a139fb225", 00:19:12.399 "is_configured": true, 00:19:12.399 "data_offset": 256, 00:19:12.399 "data_size": 7936 00:19:12.399 }, 00:19:12.399 { 00:19:12.399 "name": "BaseBdev2", 00:19:12.399 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:12.399 "is_configured": true, 00:19:12.399 "data_offset": 256, 00:19:12.399 "data_size": 7936 00:19:12.399 } 00:19:12.399 ] 00:19:12.399 }' 00:19:12.399 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.657 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:12.657 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.657 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:12.657 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:19:12.657 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.657 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.657 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.657 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:12.657 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.657 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.658 "name": "raid_bdev1", 00:19:12.658 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:12.658 "strip_size_kb": 0, 00:19:12.658 "state": "online", 00:19:12.658 "raid_level": "raid1", 00:19:12.658 "superblock": true, 00:19:12.658 "num_base_bdevs": 2, 00:19:12.658 "num_base_bdevs_discovered": 2, 00:19:12.658 "num_base_bdevs_operational": 2, 00:19:12.658 "base_bdevs_list": [ 00:19:12.658 { 00:19:12.658 "name": "spare", 00:19:12.658 "uuid": "18914fb4-053a-588a-9858-ca6a139fb225", 00:19:12.658 "is_configured": true, 00:19:12.658 "data_offset": 256, 00:19:12.658 "data_size": 7936 00:19:12.658 }, 00:19:12.658 { 00:19:12.658 "name": "BaseBdev2", 00:19:12.658 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:12.658 "is_configured": true, 
00:19:12.658 "data_offset": 256, 00:19:12.658 "data_size": 7936 00:19:12.658 } 00:19:12.658 ] 00:19:12.658 }' 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:12.658 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.915 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.915 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.915 "name": "raid_bdev1", 00:19:12.915 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:12.915 "strip_size_kb": 0, 00:19:12.915 "state": "online", 00:19:12.915 "raid_level": "raid1", 00:19:12.915 "superblock": true, 00:19:12.915 "num_base_bdevs": 2, 00:19:12.915 "num_base_bdevs_discovered": 2, 00:19:12.915 "num_base_bdevs_operational": 2, 00:19:12.915 "base_bdevs_list": [ 00:19:12.915 { 00:19:12.915 "name": "spare", 00:19:12.915 "uuid": "18914fb4-053a-588a-9858-ca6a139fb225", 00:19:12.915 "is_configured": true, 00:19:12.915 "data_offset": 256, 00:19:12.915 "data_size": 7936 00:19:12.915 }, 00:19:12.915 { 00:19:12.915 "name": "BaseBdev2", 00:19:12.915 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:12.915 "is_configured": true, 00:19:12.915 "data_offset": 256, 00:19:12.915 "data_size": 7936 00:19:12.915 } 00:19:12.915 ] 00:19:12.915 }' 00:19:12.915 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.915 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.519 [2024-11-26 19:08:39.826901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:13.519 [2024-11-26 19:08:39.826951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:19:13.519 [2024-11-26 19:08:39.827069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.519 [2024-11-26 19:08:39.827175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.519 [2024-11-26 19:08:39.827196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:13.519 19:08:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:13.791 /dev/nbd0 00:19:13.791 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:13.792 1+0 records in 00:19:13.792 1+0 records out 00:19:13.792 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000339984 s, 12.0 MB/s 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:13.792 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:14.088 /dev/nbd1 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:14.088 1+0 records in 00:19:14.088 1+0 records out 00:19:14.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532148 s, 7.7 MB/s 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:14.088 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:14.347 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:14.347 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:14.347 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:14.347 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:14.347 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:19:14.347 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.347 19:08:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:14.605 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:14.605 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:14.605 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:14.605 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.605 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.605 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:14.605 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:14.605 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.605 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.605 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.863 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.122 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.122 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:15.122 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.122 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.122 [2024-11-26 19:08:41.488376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:15.122 [2024-11-26 19:08:41.488462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.122 [2024-11-26 19:08:41.488503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:15.122 [2024-11-26 19:08:41.488519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.122 [2024-11-26 19:08:41.491705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.122 [2024-11-26 19:08:41.491749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:15.122 [2024-11-26 19:08:41.491889] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:15.122 [2024-11-26 
19:08:41.491984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:15.122 [2024-11-26 19:08:41.492211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.122 spare 00:19:15.122 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.122 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:15.122 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.122 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.122 [2024-11-26 19:08:41.592391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:15.122 [2024-11-26 19:08:41.592484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:15.122 [2024-11-26 19:08:41.592994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:15.122 [2024-11-26 19:08:41.593338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:15.122 [2024-11-26 19:08:41.593366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:15.122 [2024-11-26 19:08:41.593686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.122 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.122 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:15.122 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.123 "name": "raid_bdev1", 00:19:15.123 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:15.123 "strip_size_kb": 0, 00:19:15.123 "state": "online", 00:19:15.123 "raid_level": "raid1", 00:19:15.123 "superblock": true, 00:19:15.123 "num_base_bdevs": 2, 00:19:15.123 "num_base_bdevs_discovered": 2, 00:19:15.123 "num_base_bdevs_operational": 2, 00:19:15.123 "base_bdevs_list": [ 00:19:15.123 { 00:19:15.123 "name": "spare", 00:19:15.123 "uuid": "18914fb4-053a-588a-9858-ca6a139fb225", 00:19:15.123 "is_configured": true, 00:19:15.123 "data_offset": 256, 00:19:15.123 "data_size": 7936 00:19:15.123 }, 00:19:15.123 { 
00:19:15.123 "name": "BaseBdev2", 00:19:15.123 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:15.123 "is_configured": true, 00:19:15.123 "data_offset": 256, 00:19:15.123 "data_size": 7936 00:19:15.123 } 00:19:15.123 ] 00:19:15.123 }' 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.123 19:08:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.689 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.689 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.689 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:15.689 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:15.689 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.690 "name": "raid_bdev1", 00:19:15.690 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:15.690 "strip_size_kb": 0, 00:19:15.690 "state": "online", 00:19:15.690 "raid_level": "raid1", 00:19:15.690 "superblock": true, 00:19:15.690 "num_base_bdevs": 2, 00:19:15.690 "num_base_bdevs_discovered": 2, 
00:19:15.690 "num_base_bdevs_operational": 2, 00:19:15.690 "base_bdevs_list": [ 00:19:15.690 { 00:19:15.690 "name": "spare", 00:19:15.690 "uuid": "18914fb4-053a-588a-9858-ca6a139fb225", 00:19:15.690 "is_configured": true, 00:19:15.690 "data_offset": 256, 00:19:15.690 "data_size": 7936 00:19:15.690 }, 00:19:15.690 { 00:19:15.690 "name": "BaseBdev2", 00:19:15.690 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:15.690 "is_configured": true, 00:19:15.690 "data_offset": 256, 00:19:15.690 "data_size": 7936 00:19:15.690 } 00:19:15.690 ] 00:19:15.690 }' 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:15.690 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:15.948 [2024-11-26 19:08:42.349830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.948 19:08:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.948 "name": "raid_bdev1", 00:19:15.948 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:15.948 "strip_size_kb": 0, 00:19:15.948 "state": "online", 00:19:15.948 "raid_level": "raid1", 00:19:15.948 "superblock": true, 00:19:15.948 "num_base_bdevs": 2, 00:19:15.948 "num_base_bdevs_discovered": 1, 00:19:15.948 "num_base_bdevs_operational": 1, 00:19:15.948 "base_bdevs_list": [ 00:19:15.948 { 00:19:15.948 "name": null, 00:19:15.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.948 "is_configured": false, 00:19:15.948 "data_offset": 0, 00:19:15.948 "data_size": 7936 00:19:15.948 }, 00:19:15.948 { 00:19:15.948 "name": "BaseBdev2", 00:19:15.948 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:15.948 "is_configured": true, 00:19:15.948 "data_offset": 256, 00:19:15.948 "data_size": 7936 00:19:15.948 } 00:19:15.948 ] 00:19:15.948 }' 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.948 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.514 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:16.514 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.514 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.514 [2024-11-26 19:08:42.890001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.514 [2024-11-26 19:08:42.890343] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:16.514 [2024-11-26 19:08:42.890403] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:16.514 [2024-11-26 19:08:42.890464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.514 [2024-11-26 19:08:42.909172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:16.514 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.514 19:08:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:16.514 [2024-11-26 19:08:42.912193] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:17.544 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.544 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.544 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.544 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.544 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.544 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.544 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.544 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.545 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.545 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.545 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.545 "name": "raid_bdev1", 00:19:17.545 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:17.545 "strip_size_kb": 0, 00:19:17.545 "state": "online", 
00:19:17.545 "raid_level": "raid1", 00:19:17.545 "superblock": true, 00:19:17.545 "num_base_bdevs": 2, 00:19:17.545 "num_base_bdevs_discovered": 2, 00:19:17.545 "num_base_bdevs_operational": 2, 00:19:17.545 "process": { 00:19:17.545 "type": "rebuild", 00:19:17.545 "target": "spare", 00:19:17.545 "progress": { 00:19:17.545 "blocks": 2560, 00:19:17.545 "percent": 32 00:19:17.545 } 00:19:17.545 }, 00:19:17.545 "base_bdevs_list": [ 00:19:17.545 { 00:19:17.545 "name": "spare", 00:19:17.545 "uuid": "18914fb4-053a-588a-9858-ca6a139fb225", 00:19:17.545 "is_configured": true, 00:19:17.545 "data_offset": 256, 00:19:17.545 "data_size": 7936 00:19:17.545 }, 00:19:17.545 { 00:19:17.545 "name": "BaseBdev2", 00:19:17.545 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:17.545 "is_configured": true, 00:19:17.545 "data_offset": 256, 00:19:17.545 "data_size": 7936 00:19:17.545 } 00:19:17.545 ] 00:19:17.545 }' 00:19:17.545 19:08:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.545 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.545 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.545 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.545 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:17.545 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.545 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.545 [2024-11-26 19:08:44.086451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.545 [2024-11-26 19:08:44.124094] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:17.545 [2024-11-26 
19:08:44.124205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.545 [2024-11-26 19:08:44.124232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.545 [2024-11-26 19:08:44.124247] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:17.818 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.819 "name": "raid_bdev1", 00:19:17.819 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:17.819 "strip_size_kb": 0, 00:19:17.819 "state": "online", 00:19:17.819 "raid_level": "raid1", 00:19:17.819 "superblock": true, 00:19:17.819 "num_base_bdevs": 2, 00:19:17.819 "num_base_bdevs_discovered": 1, 00:19:17.819 "num_base_bdevs_operational": 1, 00:19:17.819 "base_bdevs_list": [ 00:19:17.819 { 00:19:17.819 "name": null, 00:19:17.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.819 "is_configured": false, 00:19:17.819 "data_offset": 0, 00:19:17.819 "data_size": 7936 00:19:17.819 }, 00:19:17.819 { 00:19:17.819 "name": "BaseBdev2", 00:19:17.819 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:17.819 "is_configured": true, 00:19:17.819 "data_offset": 256, 00:19:17.819 "data_size": 7936 00:19:17.819 } 00:19:17.819 ] 00:19:17.819 }' 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.819 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.078 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:18.078 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.078 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.078 [2024-11-26 19:08:44.658417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:18.078 [2024-11-26 19:08:44.658522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.078 [2024-11-26 19:08:44.658556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:19:18.078 [2024-11-26 19:08:44.658576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.078 [2024-11-26 19:08:44.659235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.078 [2024-11-26 19:08:44.659279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:18.078 [2024-11-26 19:08:44.659479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:18.078 [2024-11-26 19:08:44.659514] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:18.078 [2024-11-26 19:08:44.659527] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:18.078 [2024-11-26 19:08:44.659564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.078 [2024-11-26 19:08:44.676240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:18.078 spare 00:19:18.078 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.078 19:08:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:18.078 [2024-11-26 19:08:44.679017] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.456 "name": "raid_bdev1", 00:19:19.456 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:19.456 "strip_size_kb": 0, 00:19:19.456 "state": "online", 00:19:19.456 "raid_level": "raid1", 00:19:19.456 "superblock": true, 00:19:19.456 "num_base_bdevs": 2, 00:19:19.456 "num_base_bdevs_discovered": 2, 00:19:19.456 "num_base_bdevs_operational": 2, 00:19:19.456 "process": { 00:19:19.456 "type": "rebuild", 00:19:19.456 "target": "spare", 00:19:19.456 "progress": { 00:19:19.456 "blocks": 2560, 00:19:19.456 "percent": 32 00:19:19.456 } 00:19:19.456 }, 00:19:19.456 "base_bdevs_list": [ 00:19:19.456 { 00:19:19.456 "name": "spare", 00:19:19.456 "uuid": "18914fb4-053a-588a-9858-ca6a139fb225", 00:19:19.456 "is_configured": true, 00:19:19.456 "data_offset": 256, 00:19:19.456 "data_size": 7936 00:19:19.456 }, 00:19:19.456 { 00:19:19.456 "name": "BaseBdev2", 00:19:19.456 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:19.456 "is_configured": true, 00:19:19.456 "data_offset": 256, 00:19:19.456 "data_size": 7936 00:19:19.456 } 00:19:19.456 ] 00:19:19.456 }' 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.456 [2024-11-26 19:08:45.849301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.456 [2024-11-26 19:08:45.891062] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:19.456 [2024-11-26 19:08:45.891179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.456 [2024-11-26 19:08:45.891209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.456 [2024-11-26 19:08:45.891221] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.456 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.457 "name": "raid_bdev1", 00:19:19.457 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:19.457 "strip_size_kb": 0, 00:19:19.457 "state": "online", 00:19:19.457 "raid_level": "raid1", 00:19:19.457 "superblock": true, 00:19:19.457 "num_base_bdevs": 2, 00:19:19.457 "num_base_bdevs_discovered": 1, 00:19:19.457 "num_base_bdevs_operational": 1, 00:19:19.457 "base_bdevs_list": [ 00:19:19.457 { 00:19:19.457 "name": null, 00:19:19.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.457 "is_configured": false, 00:19:19.457 "data_offset": 0, 00:19:19.457 "data_size": 7936 00:19:19.457 }, 00:19:19.457 { 00:19:19.457 "name": "BaseBdev2", 00:19:19.457 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:19.457 "is_configured": true, 00:19:19.457 "data_offset": 256, 00:19:19.457 "data_size": 7936 00:19:19.457 } 00:19:19.457 ] 00:19:19.457 }' 
00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.457 19:08:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.024 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.024 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.024 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:20.024 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:20.024 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.024 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.024 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.024 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.024 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.024 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.024 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.024 "name": "raid_bdev1", 00:19:20.024 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:20.024 "strip_size_kb": 0, 00:19:20.024 "state": "online", 00:19:20.024 "raid_level": "raid1", 00:19:20.024 "superblock": true, 00:19:20.024 "num_base_bdevs": 2, 00:19:20.024 "num_base_bdevs_discovered": 1, 00:19:20.024 "num_base_bdevs_operational": 1, 00:19:20.024 "base_bdevs_list": [ 00:19:20.024 { 00:19:20.024 "name": null, 00:19:20.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.024 "is_configured": false, 00:19:20.024 "data_offset": 0, 
00:19:20.024 "data_size": 7936 00:19:20.024 }, 00:19:20.024 { 00:19:20.024 "name": "BaseBdev2", 00:19:20.024 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:20.024 "is_configured": true, 00:19:20.024 "data_offset": 256, 00:19:20.024 "data_size": 7936 00:19:20.025 } 00:19:20.025 ] 00:19:20.025 }' 00:19:20.025 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.025 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:20.025 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.025 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:20.025 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:20.025 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.025 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.025 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.025 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:20.025 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.025 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.025 [2024-11-26 19:08:46.641939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:20.025 [2024-11-26 19:08:46.642035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.025 [2024-11-26 19:08:46.642081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:20.025 [2024-11-26 19:08:46.642112] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.025 [2024-11-26 19:08:46.642799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.025 [2024-11-26 19:08:46.642836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:20.025 [2024-11-26 19:08:46.642978] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:20.025 [2024-11-26 19:08:46.643001] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:20.025 [2024-11-26 19:08:46.643018] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:20.025 [2024-11-26 19:08:46.643037] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:20.283 BaseBdev1 00:19:20.283 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.283 19:08:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.217 "name": "raid_bdev1", 00:19:21.217 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:21.217 "strip_size_kb": 0, 00:19:21.217 "state": "online", 00:19:21.217 "raid_level": "raid1", 00:19:21.217 "superblock": true, 00:19:21.217 "num_base_bdevs": 2, 00:19:21.217 "num_base_bdevs_discovered": 1, 00:19:21.217 "num_base_bdevs_operational": 1, 00:19:21.217 "base_bdevs_list": [ 00:19:21.217 { 00:19:21.217 "name": null, 00:19:21.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.217 "is_configured": false, 00:19:21.217 "data_offset": 0, 00:19:21.217 "data_size": 7936 00:19:21.217 }, 00:19:21.217 { 00:19:21.217 "name": "BaseBdev2", 00:19:21.217 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:21.217 "is_configured": true, 00:19:21.217 "data_offset": 256, 00:19:21.217 "data_size": 7936 00:19:21.217 } 00:19:21.217 ] 00:19:21.217 }' 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.217 19:08:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.786 "name": "raid_bdev1", 00:19:21.786 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:21.786 "strip_size_kb": 0, 00:19:21.786 "state": "online", 00:19:21.786 "raid_level": "raid1", 00:19:21.786 "superblock": true, 00:19:21.786 "num_base_bdevs": 2, 00:19:21.786 "num_base_bdevs_discovered": 1, 00:19:21.786 "num_base_bdevs_operational": 1, 00:19:21.786 "base_bdevs_list": [ 00:19:21.786 { 00:19:21.786 "name": null, 00:19:21.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.786 "is_configured": false, 00:19:21.786 "data_offset": 0, 00:19:21.786 "data_size": 7936 00:19:21.786 }, 00:19:21.786 { 00:19:21.786 "name": "BaseBdev2", 00:19:21.786 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:21.786 "is_configured": true, 
00:19:21.786 "data_offset": 256, 00:19:21.786 "data_size": 7936 00:19:21.786 } 00:19:21.786 ] 00:19:21.786 }' 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.786 [2024-11-26 19:08:48.374480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:21.786 [2024-11-26 19:08:48.374743] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:21.786 [2024-11-26 19:08:48.374780] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:21.786 request: 00:19:21.786 { 00:19:21.786 "base_bdev": "BaseBdev1", 00:19:21.786 "raid_bdev": "raid_bdev1", 00:19:21.786 "method": "bdev_raid_add_base_bdev", 00:19:21.786 "req_id": 1 00:19:21.786 } 00:19:21.786 Got JSON-RPC error response 00:19:21.786 response: 00:19:21.786 { 00:19:21.786 "code": -22, 00:19:21.786 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:21.786 } 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:21.786 19:08:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.164 "name": "raid_bdev1", 00:19:23.164 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:23.164 "strip_size_kb": 0, 00:19:23.164 "state": "online", 00:19:23.164 "raid_level": "raid1", 00:19:23.164 "superblock": true, 00:19:23.164 "num_base_bdevs": 2, 00:19:23.164 "num_base_bdevs_discovered": 1, 00:19:23.164 "num_base_bdevs_operational": 1, 00:19:23.164 "base_bdevs_list": [ 00:19:23.164 { 00:19:23.164 "name": null, 00:19:23.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.164 "is_configured": false, 00:19:23.164 "data_offset": 0, 00:19:23.164 "data_size": 7936 00:19:23.164 }, 00:19:23.164 { 00:19:23.164 "name": "BaseBdev2", 00:19:23.164 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:23.164 "is_configured": true, 00:19:23.164 "data_offset": 256, 00:19:23.164 "data_size": 7936 00:19:23.164 } 00:19:23.164 ] 00:19:23.164 }' 
00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.164 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.423 "name": "raid_bdev1", 00:19:23.423 "uuid": "be52d8dd-7f50-4b91-9323-89f0fb7d55ed", 00:19:23.423 "strip_size_kb": 0, 00:19:23.423 "state": "online", 00:19:23.423 "raid_level": "raid1", 00:19:23.423 "superblock": true, 00:19:23.423 "num_base_bdevs": 2, 00:19:23.423 "num_base_bdevs_discovered": 1, 00:19:23.423 "num_base_bdevs_operational": 1, 00:19:23.423 "base_bdevs_list": [ 00:19:23.423 { 00:19:23.423 "name": null, 00:19:23.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.423 "is_configured": false, 00:19:23.423 "data_offset": 0, 
00:19:23.423 "data_size": 7936 00:19:23.423 }, 00:19:23.423 { 00:19:23.423 "name": "BaseBdev2", 00:19:23.423 "uuid": "7eca92c2-e3fe-55f4-a221-0a107f80d219", 00:19:23.423 "is_configured": true, 00:19:23.423 "data_offset": 256, 00:19:23.423 "data_size": 7936 00:19:23.423 } 00:19:23.423 ] 00:19:23.423 }' 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.423 19:08:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87453 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87453 ']' 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87453 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87453 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.682 killing process with pid 87453 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87453' 00:19:23.682 Received shutdown signal, test time was about 60.000000 seconds 00:19:23.682 00:19:23.682 Latency(us) 00:19:23.682 [2024-11-26T19:08:50.305Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.682 [2024-11-26T19:08:50.305Z] =================================================================================================================== 00:19:23.682 [2024-11-26T19:08:50.305Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87453 00:19:23.682 [2024-11-26 19:08:50.084063] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.682 19:08:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87453 00:19:23.682 [2024-11-26 19:08:50.084245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.682 [2024-11-26 19:08:50.084338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.682 [2024-11-26 19:08:50.084361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:23.941 [2024-11-26 19:08:50.378634] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.319 19:08:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:25.319 00:19:25.319 real 0m22.214s 00:19:25.319 user 0m30.035s 00:19:25.319 sys 0m2.766s 00:19:25.319 19:08:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.319 19:08:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.319 ************************************ 00:19:25.319 END TEST raid_rebuild_test_sb_4k 00:19:25.319 ************************************ 00:19:25.319 19:08:51 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:25.319 19:08:51 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:25.319 19:08:51 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:25.319 19:08:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.319 19:08:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:25.319 ************************************ 00:19:25.319 START TEST raid_state_function_test_sb_md_separate 00:19:25.319 ************************************ 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:25.319 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:25.320 19:08:51 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=88156 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:25.320 Process raid pid: 88156 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88156' 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 88156 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88156 ']' 00:19:25.320 19:08:51 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.320 19:08:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:25.320 [2024-11-26 19:08:51.747936] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:19:25.320 [2024-11-26 19:08:51.748129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.577 [2024-11-26 19:08:51.947508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.578 [2024-11-26 19:08:52.125025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.835 [2024-11-26 19:08:52.384326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:25.835 [2024-11-26 19:08:52.384397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.401 [2024-11-26 19:08:52.870759] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:26.401 [2024-11-26 19:08:52.870829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:26.401 [2024-11-26 19:08:52.870848] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.401 [2024-11-26 19:08:52.870865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.401 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.401 "name": "Existed_Raid", 00:19:26.401 "uuid": "6d8c86ae-9afb-467e-952d-db9a212aa1a6", 00:19:26.401 "strip_size_kb": 0, 00:19:26.401 "state": "configuring", 00:19:26.401 "raid_level": "raid1", 00:19:26.401 "superblock": true, 00:19:26.401 "num_base_bdevs": 2, 00:19:26.401 "num_base_bdevs_discovered": 0, 00:19:26.401 "num_base_bdevs_operational": 2, 00:19:26.401 "base_bdevs_list": [ 00:19:26.401 { 00:19:26.401 "name": "BaseBdev1", 00:19:26.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.401 "is_configured": false, 00:19:26.401 "data_offset": 0, 00:19:26.401 "data_size": 0 00:19:26.401 }, 00:19:26.401 { 00:19:26.401 "name": "BaseBdev2", 00:19:26.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.401 "is_configured": false, 00:19:26.401 "data_offset": 0, 00:19:26.402 "data_size": 0 00:19:26.402 } 00:19:26.402 ] 00:19:26.402 }' 00:19:26.402 19:08:52 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.402 19:08:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.989 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:26.989 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.989 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.989 [2024-11-26 19:08:53.406878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:26.989 [2024-11-26 19:08:53.406942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:26.989 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.990 [2024-11-26 19:08:53.414810] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:26.990 [2024-11-26 19:08:53.414894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:26.990 [2024-11-26 19:08:53.414910] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.990 [2024-11-26 19:08:53.414934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.990 19:08:53 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.990 [2024-11-26 19:08:53.463321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.990 BaseBdev1 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.990 [ 00:19:26.990 { 00:19:26.990 "name": "BaseBdev1", 00:19:26.990 "aliases": [ 00:19:26.990 "d6f220d9-0368-48d6-ac0f-d0b27057d97d" 00:19:26.990 ], 00:19:26.990 "product_name": "Malloc disk", 00:19:26.990 "block_size": 4096, 00:19:26.990 "num_blocks": 8192, 00:19:26.990 "uuid": "d6f220d9-0368-48d6-ac0f-d0b27057d97d", 00:19:26.990 "md_size": 32, 00:19:26.990 "md_interleave": false, 00:19:26.990 "dif_type": 0, 00:19:26.990 "assigned_rate_limits": { 00:19:26.990 "rw_ios_per_sec": 0, 00:19:26.990 "rw_mbytes_per_sec": 0, 00:19:26.990 "r_mbytes_per_sec": 0, 00:19:26.990 "w_mbytes_per_sec": 0 00:19:26.990 }, 00:19:26.990 "claimed": true, 00:19:26.990 "claim_type": "exclusive_write", 00:19:26.990 "zoned": false, 00:19:26.990 "supported_io_types": { 00:19:26.990 "read": true, 00:19:26.990 "write": true, 00:19:26.990 "unmap": true, 00:19:26.990 "flush": true, 00:19:26.990 "reset": true, 00:19:26.990 "nvme_admin": false, 00:19:26.990 "nvme_io": false, 00:19:26.990 "nvme_io_md": false, 00:19:26.990 "write_zeroes": true, 00:19:26.990 "zcopy": true, 00:19:26.990 "get_zone_info": false, 00:19:26.990 "zone_management": false, 00:19:26.990 "zone_append": false, 00:19:26.990 "compare": false, 00:19:26.990 "compare_and_write": false, 00:19:26.990 "abort": true, 00:19:26.990 "seek_hole": false, 00:19:26.990 "seek_data": false, 00:19:26.990 "copy": true, 00:19:26.990 "nvme_iov_md": false 00:19:26.990 }, 00:19:26.990 "memory_domains": [ 00:19:26.990 { 00:19:26.990 "dma_device_id": "system", 00:19:26.990 "dma_device_type": 1 00:19:26.990 }, 
00:19:26.990 { 00:19:26.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:26.990 "dma_device_type": 2 00:19:26.990 } 00:19:26.990 ], 00:19:26.990 "driver_specific": {} 00:19:26.990 } 00:19:26.990 ] 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.990 "name": "Existed_Raid", 00:19:26.990 "uuid": "dcae1b54-e317-4dfa-ae0a-e65fc6f6fb38", 00:19:26.990 "strip_size_kb": 0, 00:19:26.990 "state": "configuring", 00:19:26.990 "raid_level": "raid1", 00:19:26.990 "superblock": true, 00:19:26.990 "num_base_bdevs": 2, 00:19:26.990 "num_base_bdevs_discovered": 1, 00:19:26.990 "num_base_bdevs_operational": 2, 00:19:26.990 "base_bdevs_list": [ 00:19:26.990 { 00:19:26.990 "name": "BaseBdev1", 00:19:26.990 "uuid": "d6f220d9-0368-48d6-ac0f-d0b27057d97d", 00:19:26.990 "is_configured": true, 00:19:26.990 "data_offset": 256, 00:19:26.990 "data_size": 7936 00:19:26.990 }, 00:19:26.990 { 00:19:26.990 "name": "BaseBdev2", 00:19:26.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.990 "is_configured": false, 00:19:26.990 "data_offset": 0, 00:19:26.990 "data_size": 0 00:19:26.990 } 00:19:26.990 ] 00:19:26.990 }' 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.990 19:08:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:19:27.557 [2024-11-26 19:08:54.035662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:27.557 [2024-11-26 19:08:54.035778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.557 [2024-11-26 19:08:54.043676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.557 [2024-11-26 19:08:54.046400] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:27.557 [2024-11-26 19:08:54.046460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.557 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.558 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.558 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.558 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.558 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.558 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.558 "name": "Existed_Raid", 00:19:27.558 "uuid": "5ca7274f-bc0c-471c-a40e-5f761b535be3", 00:19:27.558 "strip_size_kb": 0, 00:19:27.558 "state": "configuring", 00:19:27.558 "raid_level": "raid1", 00:19:27.558 "superblock": true, 00:19:27.558 "num_base_bdevs": 2, 00:19:27.558 "num_base_bdevs_discovered": 1, 00:19:27.558 
"num_base_bdevs_operational": 2, 00:19:27.558 "base_bdevs_list": [ 00:19:27.558 { 00:19:27.558 "name": "BaseBdev1", 00:19:27.558 "uuid": "d6f220d9-0368-48d6-ac0f-d0b27057d97d", 00:19:27.558 "is_configured": true, 00:19:27.558 "data_offset": 256, 00:19:27.558 "data_size": 7936 00:19:27.558 }, 00:19:27.558 { 00:19:27.558 "name": "BaseBdev2", 00:19:27.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.558 "is_configured": false, 00:19:27.558 "data_offset": 0, 00:19:27.558 "data_size": 0 00:19:27.558 } 00:19:27.558 ] 00:19:27.558 }' 00:19:27.558 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.558 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.124 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:28.124 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.124 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.124 [2024-11-26 19:08:54.626870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:28.124 [2024-11-26 19:08:54.627191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:28.124 [2024-11-26 19:08:54.627215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:28.124 [2024-11-26 19:08:54.627333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:28.124 [2024-11-26 19:08:54.627504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:28.124 [2024-11-26 19:08:54.627535] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:28.124 BaseBdev2 
00:19:28.124 [2024-11-26 19:08:54.627655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.124 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.124 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:28.124 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:28.124 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.125 [ 00:19:28.125 { 00:19:28.125 "name": "BaseBdev2", 00:19:28.125 "aliases": [ 00:19:28.125 
"43aab83d-1754-4132-8d33-3b13e976daa0" 00:19:28.125 ], 00:19:28.125 "product_name": "Malloc disk", 00:19:28.125 "block_size": 4096, 00:19:28.125 "num_blocks": 8192, 00:19:28.125 "uuid": "43aab83d-1754-4132-8d33-3b13e976daa0", 00:19:28.125 "md_size": 32, 00:19:28.125 "md_interleave": false, 00:19:28.125 "dif_type": 0, 00:19:28.125 "assigned_rate_limits": { 00:19:28.125 "rw_ios_per_sec": 0, 00:19:28.125 "rw_mbytes_per_sec": 0, 00:19:28.125 "r_mbytes_per_sec": 0, 00:19:28.125 "w_mbytes_per_sec": 0 00:19:28.125 }, 00:19:28.125 "claimed": true, 00:19:28.125 "claim_type": "exclusive_write", 00:19:28.125 "zoned": false, 00:19:28.125 "supported_io_types": { 00:19:28.125 "read": true, 00:19:28.125 "write": true, 00:19:28.125 "unmap": true, 00:19:28.125 "flush": true, 00:19:28.125 "reset": true, 00:19:28.125 "nvme_admin": false, 00:19:28.125 "nvme_io": false, 00:19:28.125 "nvme_io_md": false, 00:19:28.125 "write_zeroes": true, 00:19:28.125 "zcopy": true, 00:19:28.125 "get_zone_info": false, 00:19:28.125 "zone_management": false, 00:19:28.125 "zone_append": false, 00:19:28.125 "compare": false, 00:19:28.125 "compare_and_write": false, 00:19:28.125 "abort": true, 00:19:28.125 "seek_hole": false, 00:19:28.125 "seek_data": false, 00:19:28.125 "copy": true, 00:19:28.125 "nvme_iov_md": false 00:19:28.125 }, 00:19:28.125 "memory_domains": [ 00:19:28.125 { 00:19:28.125 "dma_device_id": "system", 00:19:28.125 "dma_device_type": 1 00:19:28.125 }, 00:19:28.125 { 00:19:28.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.125 "dma_device_type": 2 00:19:28.125 } 00:19:28.125 ], 00:19:28.125 "driver_specific": {} 00:19:28.125 } 00:19:28.125 ] 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.125 19:08:54 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.125 "name": "Existed_Raid", 00:19:28.125 "uuid": "5ca7274f-bc0c-471c-a40e-5f761b535be3", 00:19:28.125 "strip_size_kb": 0, 00:19:28.125 "state": "online", 00:19:28.125 "raid_level": "raid1", 00:19:28.125 "superblock": true, 00:19:28.125 "num_base_bdevs": 2, 00:19:28.125 "num_base_bdevs_discovered": 2, 00:19:28.125 "num_base_bdevs_operational": 2, 00:19:28.125 "base_bdevs_list": [ 00:19:28.125 { 00:19:28.125 "name": "BaseBdev1", 00:19:28.125 "uuid": "d6f220d9-0368-48d6-ac0f-d0b27057d97d", 00:19:28.125 "is_configured": true, 00:19:28.125 "data_offset": 256, 00:19:28.125 "data_size": 7936 00:19:28.125 }, 00:19:28.125 { 00:19:28.125 "name": "BaseBdev2", 00:19:28.125 "uuid": "43aab83d-1754-4132-8d33-3b13e976daa0", 00:19:28.125 "is_configured": true, 00:19:28.125 "data_offset": 256, 00:19:28.125 "data_size": 7936 00:19:28.125 } 00:19:28.125 ] 00:19:28.125 }' 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.125 19:08:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.692 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:28.692 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:28.692 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:28.692 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:28.692 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:28.692 19:08:55 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:28.692 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:28.692 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:28.692 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.692 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.692 [2024-11-26 19:08:55.207541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.692 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.692 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:28.692 "name": "Existed_Raid", 00:19:28.692 "aliases": [ 00:19:28.692 "5ca7274f-bc0c-471c-a40e-5f761b535be3" 00:19:28.692 ], 00:19:28.692 "product_name": "Raid Volume", 00:19:28.692 "block_size": 4096, 00:19:28.692 "num_blocks": 7936, 00:19:28.692 "uuid": "5ca7274f-bc0c-471c-a40e-5f761b535be3", 00:19:28.692 "md_size": 32, 00:19:28.692 "md_interleave": false, 00:19:28.692 "dif_type": 0, 00:19:28.692 "assigned_rate_limits": { 00:19:28.692 "rw_ios_per_sec": 0, 00:19:28.692 "rw_mbytes_per_sec": 0, 00:19:28.692 "r_mbytes_per_sec": 0, 00:19:28.692 "w_mbytes_per_sec": 0 00:19:28.692 }, 00:19:28.692 "claimed": false, 00:19:28.692 "zoned": false, 00:19:28.692 "supported_io_types": { 00:19:28.692 "read": true, 00:19:28.692 "write": true, 00:19:28.692 "unmap": false, 00:19:28.692 "flush": false, 00:19:28.692 "reset": true, 00:19:28.692 "nvme_admin": false, 00:19:28.692 "nvme_io": false, 00:19:28.692 "nvme_io_md": false, 00:19:28.692 "write_zeroes": true, 00:19:28.692 "zcopy": false, 00:19:28.693 "get_zone_info": 
false, 00:19:28.693 "zone_management": false, 00:19:28.693 "zone_append": false, 00:19:28.693 "compare": false, 00:19:28.693 "compare_and_write": false, 00:19:28.693 "abort": false, 00:19:28.693 "seek_hole": false, 00:19:28.693 "seek_data": false, 00:19:28.693 "copy": false, 00:19:28.693 "nvme_iov_md": false 00:19:28.693 }, 00:19:28.693 "memory_domains": [ 00:19:28.693 { 00:19:28.693 "dma_device_id": "system", 00:19:28.693 "dma_device_type": 1 00:19:28.693 }, 00:19:28.693 { 00:19:28.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.693 "dma_device_type": 2 00:19:28.693 }, 00:19:28.693 { 00:19:28.693 "dma_device_id": "system", 00:19:28.693 "dma_device_type": 1 00:19:28.693 }, 00:19:28.693 { 00:19:28.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.693 "dma_device_type": 2 00:19:28.693 } 00:19:28.693 ], 00:19:28.693 "driver_specific": { 00:19:28.693 "raid": { 00:19:28.693 "uuid": "5ca7274f-bc0c-471c-a40e-5f761b535be3", 00:19:28.693 "strip_size_kb": 0, 00:19:28.693 "state": "online", 00:19:28.693 "raid_level": "raid1", 00:19:28.693 "superblock": true, 00:19:28.693 "num_base_bdevs": 2, 00:19:28.693 "num_base_bdevs_discovered": 2, 00:19:28.693 "num_base_bdevs_operational": 2, 00:19:28.693 "base_bdevs_list": [ 00:19:28.693 { 00:19:28.693 "name": "BaseBdev1", 00:19:28.693 "uuid": "d6f220d9-0368-48d6-ac0f-d0b27057d97d", 00:19:28.693 "is_configured": true, 00:19:28.693 "data_offset": 256, 00:19:28.693 "data_size": 7936 00:19:28.693 }, 00:19:28.693 { 00:19:28.693 "name": "BaseBdev2", 00:19:28.693 "uuid": "43aab83d-1754-4132-8d33-3b13e976daa0", 00:19:28.693 "is_configured": true, 00:19:28.693 "data_offset": 256, 00:19:28.693 "data_size": 7936 00:19:28.693 } 00:19:28.693 ] 00:19:28.693 } 00:19:28.693 } 00:19:28.693 }' 00:19:28.693 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:28.951 19:08:55 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:28.951 BaseBdev2' 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.951 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.951 [2024-11-26 19:08:55.499345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:29.209 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.210 "name": "Existed_Raid", 00:19:29.210 "uuid": 
"5ca7274f-bc0c-471c-a40e-5f761b535be3", 00:19:29.210 "strip_size_kb": 0, 00:19:29.210 "state": "online", 00:19:29.210 "raid_level": "raid1", 00:19:29.210 "superblock": true, 00:19:29.210 "num_base_bdevs": 2, 00:19:29.210 "num_base_bdevs_discovered": 1, 00:19:29.210 "num_base_bdevs_operational": 1, 00:19:29.210 "base_bdevs_list": [ 00:19:29.210 { 00:19:29.210 "name": null, 00:19:29.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.210 "is_configured": false, 00:19:29.210 "data_offset": 0, 00:19:29.210 "data_size": 7936 00:19:29.210 }, 00:19:29.210 { 00:19:29.210 "name": "BaseBdev2", 00:19:29.210 "uuid": "43aab83d-1754-4132-8d33-3b13e976daa0", 00:19:29.210 "is_configured": true, 00:19:29.210 "data_offset": 256, 00:19:29.210 "data_size": 7936 00:19:29.210 } 00:19:29.210 ] 00:19:29.210 }' 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.210 19:08:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.776 [2024-11-26 19:08:56.181450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:29.776 [2024-11-26 19:08:56.181617] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.776 [2024-11-26 19:08:56.294716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.776 [2024-11-26 19:08:56.294801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.776 [2024-11-26 19:08:56.294823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.776 19:08:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 88156 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88156 ']' 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88156 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.776 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88156 00:19:30.034 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.034 killing process with pid 88156 00:19:30.034 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.034 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88156' 00:19:30.034 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88156 00:19:30.034 [2024-11-26 19:08:56.402992] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:30.034 19:08:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88156 00:19:30.034 [2024-11-26 19:08:56.420201] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:31.408 19:08:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:31.408 00:19:31.408 real 0m6.049s 00:19:31.408 user 0m9.008s 00:19:31.408 sys 0m0.906s 00:19:31.408 19:08:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.408 19:08:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.408 ************************************ 00:19:31.408 END TEST raid_state_function_test_sb_md_separate 00:19:31.408 ************************************ 00:19:31.408 19:08:57 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:31.408 19:08:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:31.408 19:08:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.408 19:08:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:31.408 ************************************ 00:19:31.408 START TEST raid_superblock_test_md_separate 00:19:31.408 ************************************ 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88414 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88414 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88414 ']' 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.408 19:08:57 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.408 19:08:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.408 [2024-11-26 19:08:57.853720] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:19:31.408 [2024-11-26 19:08:57.853992] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88414 ] 00:19:31.666 [2024-11-26 19:08:58.049900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.666 [2024-11-26 19:08:58.236432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.924 [2024-11-26 19:08:58.528697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.924 [2024-11-26 19:08:58.528791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:32.497 19:08:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.497 19:08:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.497 malloc1 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.497 [2024-11-26 19:08:59.028316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:32.497 [2024-11-26 19:08:59.028413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.497 [2024-11-26 19:08:59.028453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:19:32.497 [2024-11-26 19:08:59.028474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.497 [2024-11-26 19:08:59.031680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.497 [2024-11-26 19:08:59.031746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:32.497 pt1 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.497 malloc2 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.497 [2024-11-26 19:08:59.097914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:32.497 [2024-11-26 19:08:59.098004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.497 [2024-11-26 19:08:59.098047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:32.497 [2024-11-26 19:08:59.098066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.497 [2024-11-26 19:08:59.101344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.497 [2024-11-26 19:08:59.101400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:32.497 pt2 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:32.497 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:32.498 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.498 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.498 [2024-11-26 19:08:59.110177] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:32.498 [2024-11-26 19:08:59.113305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:32.498 [2024-11-26 19:08:59.113644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:32.498 [2024-11-26 19:08:59.113673] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:32.498 [2024-11-26 19:08:59.113822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:32.498 [2024-11-26 19:08:59.114041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:32.498 [2024-11-26 19:08:59.114079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:32.498 [2024-11-26 19:08:59.114341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.498 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.498 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:32.498 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.498 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.498 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.498 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.498 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.498 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.498 19:08:59 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.755 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.755 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.755 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.755 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.755 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.755 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.755 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.755 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.755 "name": "raid_bdev1", 00:19:32.755 "uuid": "0141bb61-f492-4dcf-9d6e-f583a6ee3e99", 00:19:32.755 "strip_size_kb": 0, 00:19:32.755 "state": "online", 00:19:32.755 "raid_level": "raid1", 00:19:32.755 "superblock": true, 00:19:32.755 "num_base_bdevs": 2, 00:19:32.755 "num_base_bdevs_discovered": 2, 00:19:32.755 "num_base_bdevs_operational": 2, 00:19:32.755 "base_bdevs_list": [ 00:19:32.755 { 00:19:32.755 "name": "pt1", 00:19:32.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:32.755 "is_configured": true, 00:19:32.755 "data_offset": 256, 00:19:32.755 "data_size": 7936 00:19:32.755 }, 00:19:32.755 { 00:19:32.755 "name": "pt2", 00:19:32.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:32.755 "is_configured": true, 00:19:32.755 "data_offset": 256, 00:19:32.755 "data_size": 7936 00:19:32.755 } 00:19:32.755 ] 00:19:32.755 }' 00:19:32.755 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:32.755 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.013 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:33.013 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:33.013 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:33.013 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:33.013 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:33.013 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:33.013 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:33.013 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:33.013 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.013 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.013 [2024-11-26 19:08:59.622909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:33.272 "name": "raid_bdev1", 00:19:33.272 "aliases": [ 00:19:33.272 "0141bb61-f492-4dcf-9d6e-f583a6ee3e99" 00:19:33.272 ], 00:19:33.272 "product_name": "Raid Volume", 00:19:33.272 "block_size": 4096, 00:19:33.272 "num_blocks": 7936, 00:19:33.272 "uuid": "0141bb61-f492-4dcf-9d6e-f583a6ee3e99", 00:19:33.272 "md_size": 32, 
00:19:33.272 "md_interleave": false, 00:19:33.272 "dif_type": 0, 00:19:33.272 "assigned_rate_limits": { 00:19:33.272 "rw_ios_per_sec": 0, 00:19:33.272 "rw_mbytes_per_sec": 0, 00:19:33.272 "r_mbytes_per_sec": 0, 00:19:33.272 "w_mbytes_per_sec": 0 00:19:33.272 }, 00:19:33.272 "claimed": false, 00:19:33.272 "zoned": false, 00:19:33.272 "supported_io_types": { 00:19:33.272 "read": true, 00:19:33.272 "write": true, 00:19:33.272 "unmap": false, 00:19:33.272 "flush": false, 00:19:33.272 "reset": true, 00:19:33.272 "nvme_admin": false, 00:19:33.272 "nvme_io": false, 00:19:33.272 "nvme_io_md": false, 00:19:33.272 "write_zeroes": true, 00:19:33.272 "zcopy": false, 00:19:33.272 "get_zone_info": false, 00:19:33.272 "zone_management": false, 00:19:33.272 "zone_append": false, 00:19:33.272 "compare": false, 00:19:33.272 "compare_and_write": false, 00:19:33.272 "abort": false, 00:19:33.272 "seek_hole": false, 00:19:33.272 "seek_data": false, 00:19:33.272 "copy": false, 00:19:33.272 "nvme_iov_md": false 00:19:33.272 }, 00:19:33.272 "memory_domains": [ 00:19:33.272 { 00:19:33.272 "dma_device_id": "system", 00:19:33.272 "dma_device_type": 1 00:19:33.272 }, 00:19:33.272 { 00:19:33.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.272 "dma_device_type": 2 00:19:33.272 }, 00:19:33.272 { 00:19:33.272 "dma_device_id": "system", 00:19:33.272 "dma_device_type": 1 00:19:33.272 }, 00:19:33.272 { 00:19:33.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.272 "dma_device_type": 2 00:19:33.272 } 00:19:33.272 ], 00:19:33.272 "driver_specific": { 00:19:33.272 "raid": { 00:19:33.272 "uuid": "0141bb61-f492-4dcf-9d6e-f583a6ee3e99", 00:19:33.272 "strip_size_kb": 0, 00:19:33.272 "state": "online", 00:19:33.272 "raid_level": "raid1", 00:19:33.272 "superblock": true, 00:19:33.272 "num_base_bdevs": 2, 00:19:33.272 "num_base_bdevs_discovered": 2, 00:19:33.272 "num_base_bdevs_operational": 2, 00:19:33.272 "base_bdevs_list": [ 00:19:33.272 { 00:19:33.272 "name": "pt1", 00:19:33.272 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:33.272 "is_configured": true, 00:19:33.272 "data_offset": 256, 00:19:33.272 "data_size": 7936 00:19:33.272 }, 00:19:33.272 { 00:19:33.272 "name": "pt2", 00:19:33.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.272 "is_configured": true, 00:19:33.272 "data_offset": 256, 00:19:33.272 "data_size": 7936 00:19:33.272 } 00:19:33.272 ] 00:19:33.272 } 00:19:33.272 } 00:19:33.272 }' 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:33.272 pt2' 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.272 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.531 [2024-11-26 19:08:59.906970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0141bb61-f492-4dcf-9d6e-f583a6ee3e99 00:19:33.531 
19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 0141bb61-f492-4dcf-9d6e-f583a6ee3e99 ']' 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.531 [2024-11-26 19:08:59.954600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.531 [2024-11-26 19:08:59.954638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.531 [2024-11-26 19:08:59.954781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.531 [2024-11-26 19:08:59.954887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.531 [2024-11-26 19:08:59.954911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:33.531 19:08:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.531 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:33.531 19:09:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:33.531 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:33.531 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:33.531 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.531 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.531 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.531 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.532 [2024-11-26 19:09:00.098676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:33.532 [2024-11-26 19:09:00.101330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:33.532 [2024-11-26 19:09:00.101447] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:33.532 [2024-11-26 19:09:00.101531] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:19:33.532 [2024-11-26 19:09:00.101558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.532 [2024-11-26 19:09:00.101574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:33.532 request: 00:19:33.532 { 00:19:33.532 "name": "raid_bdev1", 00:19:33.532 "raid_level": "raid1", 00:19:33.532 "base_bdevs": [ 00:19:33.532 "malloc1", 00:19:33.532 "malloc2" 00:19:33.532 ], 00:19:33.532 "superblock": false, 00:19:33.532 "method": "bdev_raid_create", 00:19:33.532 "req_id": 1 00:19:33.532 } 00:19:33.532 Got JSON-RPC error response 00:19:33.532 response: 00:19:33.532 { 00:19:33.532 "code": -17, 00:19:33.532 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:33.532 } 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.532 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.791 19:09:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.791 [2024-11-26 19:09:00.182675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:33.791 [2024-11-26 19:09:00.182777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.791 [2024-11-26 19:09:00.182806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:33.791 [2024-11-26 19:09:00.182825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.791 [2024-11-26 19:09:00.185599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.791 [2024-11-26 19:09:00.185652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:33.791 [2024-11-26 19:09:00.185737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:33.791 [2024-11-26 19:09:00.185817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:33.791 pt1 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.791 
19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.791 "name": "raid_bdev1", 00:19:33.791 "uuid": "0141bb61-f492-4dcf-9d6e-f583a6ee3e99", 00:19:33.791 "strip_size_kb": 0, 00:19:33.791 "state": "configuring", 00:19:33.791 "raid_level": "raid1", 00:19:33.791 "superblock": true, 00:19:33.791 "num_base_bdevs": 2, 00:19:33.791 "num_base_bdevs_discovered": 1, 00:19:33.791 
"num_base_bdevs_operational": 2, 00:19:33.791 "base_bdevs_list": [ 00:19:33.791 { 00:19:33.791 "name": "pt1", 00:19:33.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:33.791 "is_configured": true, 00:19:33.791 "data_offset": 256, 00:19:33.791 "data_size": 7936 00:19:33.791 }, 00:19:33.791 { 00:19:33.791 "name": null, 00:19:33.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.791 "is_configured": false, 00:19:33.791 "data_offset": 256, 00:19:33.791 "data_size": 7936 00:19:33.791 } 00:19:33.791 ] 00:19:33.791 }' 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.791 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.358 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:34.358 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:34.358 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:34.358 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:34.358 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.358 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.358 [2024-11-26 19:09:00.714767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:34.358 [2024-11-26 19:09:00.714870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.359 [2024-11-26 19:09:00.714904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:34.359 [2024-11-26 19:09:00.714924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.359 
[2024-11-26 19:09:00.715236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.359 [2024-11-26 19:09:00.715278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:34.359 [2024-11-26 19:09:00.715372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:34.359 [2024-11-26 19:09:00.715410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:34.359 [2024-11-26 19:09:00.715557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:34.359 [2024-11-26 19:09:00.715587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:34.359 [2024-11-26 19:09:00.715686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:34.359 [2024-11-26 19:09:00.715838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:34.359 [2024-11-26 19:09:00.715862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:34.359 [2024-11-26 19:09:00.715990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.359 pt2 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.359 "name": "raid_bdev1", 00:19:34.359 "uuid": "0141bb61-f492-4dcf-9d6e-f583a6ee3e99", 00:19:34.359 "strip_size_kb": 0, 00:19:34.359 "state": "online", 00:19:34.359 "raid_level": "raid1", 00:19:34.359 "superblock": true, 00:19:34.359 "num_base_bdevs": 2, 00:19:34.359 "num_base_bdevs_discovered": 2, 00:19:34.359 "num_base_bdevs_operational": 2, 00:19:34.359 "base_bdevs_list": [ 00:19:34.359 { 00:19:34.359 "name": 
"pt1", 00:19:34.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:34.359 "is_configured": true, 00:19:34.359 "data_offset": 256, 00:19:34.359 "data_size": 7936 00:19:34.359 }, 00:19:34.359 { 00:19:34.359 "name": "pt2", 00:19:34.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:34.359 "is_configured": true, 00:19:34.359 "data_offset": 256, 00:19:34.359 "data_size": 7936 00:19:34.359 } 00:19:34.359 ] 00:19:34.359 }' 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.359 19:09:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.618 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:34.618 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:34.618 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:34.618 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:34.618 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:34.618 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:34.877 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:34.877 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:34.877 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.877 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.877 [2024-11-26 19:09:01.247249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.877 19:09:01 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.877 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:34.877 "name": "raid_bdev1", 00:19:34.877 "aliases": [ 00:19:34.877 "0141bb61-f492-4dcf-9d6e-f583a6ee3e99" 00:19:34.877 ], 00:19:34.877 "product_name": "Raid Volume", 00:19:34.877 "block_size": 4096, 00:19:34.877 "num_blocks": 7936, 00:19:34.877 "uuid": "0141bb61-f492-4dcf-9d6e-f583a6ee3e99", 00:19:34.877 "md_size": 32, 00:19:34.877 "md_interleave": false, 00:19:34.877 "dif_type": 0, 00:19:34.877 "assigned_rate_limits": { 00:19:34.877 "rw_ios_per_sec": 0, 00:19:34.877 "rw_mbytes_per_sec": 0, 00:19:34.877 "r_mbytes_per_sec": 0, 00:19:34.877 "w_mbytes_per_sec": 0 00:19:34.877 }, 00:19:34.877 "claimed": false, 00:19:34.877 "zoned": false, 00:19:34.877 "supported_io_types": { 00:19:34.877 "read": true, 00:19:34.877 "write": true, 00:19:34.877 "unmap": false, 00:19:34.877 "flush": false, 00:19:34.877 "reset": true, 00:19:34.877 "nvme_admin": false, 00:19:34.877 "nvme_io": false, 00:19:34.877 "nvme_io_md": false, 00:19:34.877 "write_zeroes": true, 00:19:34.877 "zcopy": false, 00:19:34.877 "get_zone_info": false, 00:19:34.877 "zone_management": false, 00:19:34.877 "zone_append": false, 00:19:34.877 "compare": false, 00:19:34.877 "compare_and_write": false, 00:19:34.877 "abort": false, 00:19:34.877 "seek_hole": false, 00:19:34.877 "seek_data": false, 00:19:34.877 "copy": false, 00:19:34.877 "nvme_iov_md": false 00:19:34.877 }, 00:19:34.877 "memory_domains": [ 00:19:34.877 { 00:19:34.877 "dma_device_id": "system", 00:19:34.877 "dma_device_type": 1 00:19:34.877 }, 00:19:34.877 { 00:19:34.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.877 "dma_device_type": 2 00:19:34.877 }, 00:19:34.877 { 00:19:34.877 "dma_device_id": "system", 00:19:34.877 "dma_device_type": 1 00:19:34.877 }, 00:19:34.877 { 00:19:34.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.877 
"dma_device_type": 2 00:19:34.877 } 00:19:34.877 ], 00:19:34.877 "driver_specific": { 00:19:34.877 "raid": { 00:19:34.877 "uuid": "0141bb61-f492-4dcf-9d6e-f583a6ee3e99", 00:19:34.877 "strip_size_kb": 0, 00:19:34.877 "state": "online", 00:19:34.877 "raid_level": "raid1", 00:19:34.877 "superblock": true, 00:19:34.877 "num_base_bdevs": 2, 00:19:34.877 "num_base_bdevs_discovered": 2, 00:19:34.877 "num_base_bdevs_operational": 2, 00:19:34.877 "base_bdevs_list": [ 00:19:34.877 { 00:19:34.877 "name": "pt1", 00:19:34.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:34.877 "is_configured": true, 00:19:34.877 "data_offset": 256, 00:19:34.877 "data_size": 7936 00:19:34.877 }, 00:19:34.877 { 00:19:34.877 "name": "pt2", 00:19:34.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:34.877 "is_configured": true, 00:19:34.878 "data_offset": 256, 00:19:34.878 "data_size": 7936 00:19:34.878 } 00:19:34.878 ] 00:19:34.878 } 00:19:34.878 } 00:19:34.878 }' 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:34.878 pt2' 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.878 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.136 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:35.136 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:35.136 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:35.136 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.136 19:09:01 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.137 [2024-11-26 19:09:01.511425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 0141bb61-f492-4dcf-9d6e-f583a6ee3e99 '!=' 0141bb61-f492-4dcf-9d6e-f583a6ee3e99 ']' 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.137 [2024-11-26 19:09:01.555080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.137 "name": "raid_bdev1", 00:19:35.137 "uuid": "0141bb61-f492-4dcf-9d6e-f583a6ee3e99", 00:19:35.137 "strip_size_kb": 0, 00:19:35.137 "state": "online", 00:19:35.137 "raid_level": "raid1", 00:19:35.137 "superblock": true, 00:19:35.137 "num_base_bdevs": 2, 00:19:35.137 "num_base_bdevs_discovered": 1, 00:19:35.137 "num_base_bdevs_operational": 1, 00:19:35.137 "base_bdevs_list": [ 00:19:35.137 { 00:19:35.137 "name": null, 00:19:35.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.137 "is_configured": false, 00:19:35.137 "data_offset": 0, 
00:19:35.137 "data_size": 7936 00:19:35.137 }, 00:19:35.137 { 00:19:35.137 "name": "pt2", 00:19:35.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.137 "is_configured": true, 00:19:35.137 "data_offset": 256, 00:19:35.137 "data_size": 7936 00:19:35.137 } 00:19:35.137 ] 00:19:35.137 }' 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.137 19:09:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.703 [2024-11-26 19:09:02.107189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:35.703 [2024-11-26 19:09:02.107230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:35.703 [2024-11-26 19:09:02.107359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:35.703 [2024-11-26 19:09:02.107435] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:35.703 [2024-11-26 19:09:02.107456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:35.703 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.704 [2024-11-26 19:09:02.179192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:35.704 [2024-11-26 19:09:02.179273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.704 [2024-11-26 19:09:02.179321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:35.704 [2024-11-26 19:09:02.179343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.704 [2024-11-26 19:09:02.182134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.704 [2024-11-26 19:09:02.182185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:35.704 [2024-11-26 19:09:02.182266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:35.704 [2024-11-26 19:09:02.182354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:35.704 [2024-11-26 19:09:02.182482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:35.704 [2024-11-26 19:09:02.182504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:35.704 [2024-11-26 19:09:02.182620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:35.704 [2024-11-26 19:09:02.182779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:35.704 [2024-11-26 19:09:02.182804] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:35.704 [2024-11-26 19:09:02.182934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.704 pt2 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:19:35.704 "name": "raid_bdev1", 00:19:35.704 "uuid": "0141bb61-f492-4dcf-9d6e-f583a6ee3e99", 00:19:35.704 "strip_size_kb": 0, 00:19:35.704 "state": "online", 00:19:35.704 "raid_level": "raid1", 00:19:35.704 "superblock": true, 00:19:35.704 "num_base_bdevs": 2, 00:19:35.704 "num_base_bdevs_discovered": 1, 00:19:35.704 "num_base_bdevs_operational": 1, 00:19:35.704 "base_bdevs_list": [ 00:19:35.704 { 00:19:35.704 "name": null, 00:19:35.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.704 "is_configured": false, 00:19:35.704 "data_offset": 256, 00:19:35.704 "data_size": 7936 00:19:35.704 }, 00:19:35.704 { 00:19:35.704 "name": "pt2", 00:19:35.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.704 "is_configured": true, 00:19:35.704 "data_offset": 256, 00:19:35.704 "data_size": 7936 00:19:35.704 } 00:19:35.704 ] 00:19:35.704 }' 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.704 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.271 [2024-11-26 19:09:02.663311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.271 [2024-11-26 19:09:02.663352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.271 [2024-11-26 19:09:02.663458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.271 [2024-11-26 19:09:02.663551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.271 [2024-11-26 19:09:02.663568] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.271 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.271 [2024-11-26 19:09:02.727377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:36.271 [2024-11-26 19:09:02.727462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.271 [2024-11-26 19:09:02.727496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:36.271 [2024-11-26 19:09:02.727513] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:19:36.271 [2024-11-26 19:09:02.730317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.271 [2024-11-26 19:09:02.730361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:36.271 [2024-11-26 19:09:02.730450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:36.272 [2024-11-26 19:09:02.730514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.272 [2024-11-26 19:09:02.730684] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:36.272 [2024-11-26 19:09:02.730703] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.272 [2024-11-26 19:09:02.730730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:36.272 [2024-11-26 19:09:02.730817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.272 [2024-11-26 19:09:02.730926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:36.272 [2024-11-26 19:09:02.730952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:36.272 [2024-11-26 19:09:02.731049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:36.272 [2024-11-26 19:09:02.731189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:36.272 [2024-11-26 19:09:02.731218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:36.272 [2024-11-26 19:09:02.731424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.272 pt1 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.272 19:09:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.272 "name": "raid_bdev1", 00:19:36.272 "uuid": "0141bb61-f492-4dcf-9d6e-f583a6ee3e99", 00:19:36.272 "strip_size_kb": 0, 00:19:36.272 "state": "online", 00:19:36.272 "raid_level": "raid1", 00:19:36.272 "superblock": true, 00:19:36.272 "num_base_bdevs": 2, 00:19:36.272 "num_base_bdevs_discovered": 1, 00:19:36.272 "num_base_bdevs_operational": 1, 00:19:36.272 "base_bdevs_list": [ 00:19:36.272 { 00:19:36.272 "name": null, 00:19:36.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.272 "is_configured": false, 00:19:36.272 "data_offset": 256, 00:19:36.272 "data_size": 7936 00:19:36.272 }, 00:19:36.272 { 00:19:36.272 "name": "pt2", 00:19:36.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.272 "is_configured": true, 00:19:36.272 "data_offset": 256, 00:19:36.272 "data_size": 7936 00:19:36.272 } 00:19:36.272 ] 00:19:36.272 }' 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.272 19:09:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.839 [2024-11-26 19:09:03.292025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 0141bb61-f492-4dcf-9d6e-f583a6ee3e99 '!=' 0141bb61-f492-4dcf-9d6e-f583a6ee3e99 ']' 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88414 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88414 ']' 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88414 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88414 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88414' 00:19:36.839 killing process with pid 88414 00:19:36.839 19:09:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 88414 00:19:36.839 [2024-11-26 19:09:03.376589] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:36.839 19:09:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88414 00:19:36.839 [2024-11-26 19:09:03.376712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.839 [2024-11-26 19:09:03.376808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.839 [2024-11-26 19:09:03.376844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:37.098 [2024-11-26 19:09:03.587745] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:38.475 19:09:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:38.475 00:19:38.475 real 0m7.008s 00:19:38.475 user 0m10.953s 00:19:38.475 sys 0m1.112s 00:19:38.475 19:09:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.475 19:09:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.475 ************************************ 00:19:38.475 END TEST raid_superblock_test_md_separate 00:19:38.475 ************************************ 00:19:38.475 19:09:04 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:38.475 19:09:04 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:38.475 19:09:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:38.475 19:09:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.475 19:09:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:38.475 ************************************ 00:19:38.475 START TEST 
raid_rebuild_test_sb_md_separate 00:19:38.475 ************************************ 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:38.475 19:09:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88748 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88748 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88748 ']' 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.475 19:09:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.475 [2024-11-26 19:09:04.898281] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:19:38.475 [2024-11-26 19:09:04.898477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88748 ] 00:19:38.475 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:38.475 Zero copy mechanism will not be used. 00:19:38.475 [2024-11-26 19:09:05.075289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.733 [2024-11-26 19:09:05.222225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.992 [2024-11-26 19:09:05.450851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.992 [2024-11-26 19:09:05.450956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.250 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.250 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:39.250 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:39.250 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:39.250 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.250 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.509 BaseBdev1_malloc 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.509 [2024-11-26 19:09:05.917977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:39.509 [2024-11-26 19:09:05.918063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.509 [2024-11-26 19:09:05.918106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:39.509 [2024-11-26 19:09:05.918126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.509 [2024-11-26 19:09:05.920855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.509 [2024-11-26 19:09:05.920898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:39.509 BaseBdev1 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.509 19:09:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.509 BaseBdev2_malloc 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.509 [2024-11-26 19:09:05.975633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:39.509 [2024-11-26 19:09:05.975735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.509 [2024-11-26 19:09:05.975773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:39.509 [2024-11-26 19:09:05.975792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.509 [2024-11-26 19:09:05.978609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.509 [2024-11-26 19:09:05.978659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:39.509 BaseBdev2 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.509 19:09:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.509 spare_malloc 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.509 spare_delay 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.509 [2024-11-26 19:09:06.067786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:39.509 [2024-11-26 19:09:06.067867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.509 [2024-11-26 19:09:06.067903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:39.509 [2024-11-26 19:09:06.067923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.509 [2024-11-26 19:09:06.070663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.509 [2024-11-26 19:09:06.070711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:39.509 spare 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:39.509 
19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.509 [2024-11-26 19:09:06.075845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.509 [2024-11-26 19:09:06.078437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:39.509 [2024-11-26 19:09:06.078707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:39.509 [2024-11-26 19:09:06.078742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:39.509 [2024-11-26 19:09:06.078874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:39.509 [2024-11-26 19:09:06.079067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:39.509 [2024-11-26 19:09:06.079094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:39.509 [2024-11-26 19:09:06.079243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.509 
19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.509 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.767 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.767 "name": "raid_bdev1", 00:19:39.767 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:39.767 "strip_size_kb": 0, 00:19:39.767 "state": "online", 00:19:39.767 "raid_level": "raid1", 00:19:39.767 "superblock": true, 00:19:39.767 "num_base_bdevs": 2, 00:19:39.767 "num_base_bdevs_discovered": 2, 00:19:39.767 "num_base_bdevs_operational": 2, 00:19:39.767 "base_bdevs_list": [ 00:19:39.767 { 00:19:39.767 "name": "BaseBdev1", 00:19:39.767 "uuid": "df883c2f-8fb5-557f-a5e6-f1b2314a4166", 00:19:39.767 "is_configured": true, 00:19:39.767 "data_offset": 256, 00:19:39.767 "data_size": 7936 00:19:39.767 }, 00:19:39.767 { 00:19:39.767 "name": "BaseBdev2", 00:19:39.767 "uuid": 
"6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:39.767 "is_configured": true, 00:19:39.767 "data_offset": 256, 00:19:39.767 "data_size": 7936 00:19:39.767 } 00:19:39.767 ] 00:19:39.767 }' 00:19:39.767 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.767 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.025 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:40.025 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.025 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.025 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:40.025 [2024-11-26 19:09:06.620370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.025 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.282 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:40.282 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.282 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:40.282 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.282 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.282 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:40.283 19:09:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.283 19:09:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:40.540 [2024-11-26 19:09:07.032183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:40.540 /dev/nbd0 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:40.540 19:09:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.540 1+0 records in 00:19:40.540 1+0 records out 00:19:40.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302971 s, 13.5 MB/s 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:40.540 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:41.475 7936+0 records in 00:19:41.475 7936+0 records out 00:19:41.475 32505856 bytes (33 MB, 31 MiB) copied, 0.886749 s, 36.7 MB/s 00:19:41.475 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:41.475 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.475 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:41.475 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.475 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:41.475 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.475 19:09:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:41.734 [2024-11-26 19:09:08.292559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.734 [2024-11-26 19:09:08.304718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:41.734 19:09:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.734 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.734 "name": "raid_bdev1", 00:19:41.734 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:41.734 "strip_size_kb": 0, 00:19:41.734 "state": "online", 00:19:41.734 "raid_level": "raid1", 00:19:41.734 "superblock": true, 00:19:41.734 "num_base_bdevs": 2, 00:19:41.734 "num_base_bdevs_discovered": 1, 00:19:41.734 "num_base_bdevs_operational": 1, 00:19:41.734 "base_bdevs_list": [ 00:19:41.734 { 00:19:41.734 "name": null, 00:19:41.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.734 "is_configured": false, 00:19:41.734 "data_offset": 0, 00:19:41.734 "data_size": 7936 00:19:41.734 }, 00:19:41.734 { 00:19:41.734 "name": "BaseBdev2", 00:19:41.734 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:41.735 "is_configured": true, 00:19:41.735 "data_offset": 256, 00:19:41.735 "data_size": 7936 00:19:41.735 } 
00:19:41.735 ] 00:19:41.735 }' 00:19:41.735 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.735 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.302 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:42.302 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.302 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.302 [2024-11-26 19:09:08.756820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:42.302 [2024-11-26 19:09:08.771380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:42.302 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.302 19:09:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:42.302 [2024-11-26 19:09:08.774105] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:43.235 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.235 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.235 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.235 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.235 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.235 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:43.235 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.235 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.235 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.235 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.235 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.235 "name": "raid_bdev1", 00:19:43.235 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:43.235 "strip_size_kb": 0, 00:19:43.235 "state": "online", 00:19:43.235 "raid_level": "raid1", 00:19:43.236 "superblock": true, 00:19:43.236 "num_base_bdevs": 2, 00:19:43.236 "num_base_bdevs_discovered": 2, 00:19:43.236 "num_base_bdevs_operational": 2, 00:19:43.236 "process": { 00:19:43.236 "type": "rebuild", 00:19:43.236 "target": "spare", 00:19:43.236 "progress": { 00:19:43.236 "blocks": 2560, 00:19:43.236 "percent": 32 00:19:43.236 } 00:19:43.236 }, 00:19:43.236 "base_bdevs_list": [ 00:19:43.236 { 00:19:43.236 "name": "spare", 00:19:43.236 "uuid": "fd4fdd4f-9066-5279-832a-8f0c54a05645", 00:19:43.236 "is_configured": true, 00:19:43.236 "data_offset": 256, 00:19:43.236 "data_size": 7936 00:19:43.236 }, 00:19:43.236 { 00:19:43.236 "name": "BaseBdev2", 00:19:43.236 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:43.236 "is_configured": true, 00:19:43.236 "data_offset": 256, 00:19:43.236 "data_size": 7936 00:19:43.236 } 00:19:43.236 ] 00:19:43.236 }' 00:19:43.236 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.494 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.494 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:19:43.494 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.494 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:43.494 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.494 19:09:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.494 [2024-11-26 19:09:09.928204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.494 [2024-11-26 19:09:09.986040] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:43.494 [2024-11-26 19:09:09.986409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.494 [2024-11-26 19:09:09.986572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.494 [2024-11-26 19:09:09.986634] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.494 "name": "raid_bdev1", 00:19:43.494 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:43.494 "strip_size_kb": 0, 00:19:43.494 "state": "online", 00:19:43.494 "raid_level": "raid1", 00:19:43.494 "superblock": true, 00:19:43.494 "num_base_bdevs": 2, 00:19:43.494 "num_base_bdevs_discovered": 1, 00:19:43.494 "num_base_bdevs_operational": 1, 00:19:43.494 "base_bdevs_list": [ 00:19:43.494 { 00:19:43.494 "name": null, 00:19:43.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.494 "is_configured": false, 00:19:43.494 "data_offset": 0, 00:19:43.494 "data_size": 7936 00:19:43.494 }, 00:19:43.494 { 00:19:43.494 "name": "BaseBdev2", 00:19:43.494 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:43.494 "is_configured": true, 00:19:43.494 "data_offset": 
256, 00:19:43.494 "data_size": 7936 00:19:43.494 } 00:19:43.494 ] 00:19:43.494 }' 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.494 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.060 "name": "raid_bdev1", 00:19:44.060 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:44.060 "strip_size_kb": 0, 00:19:44.060 "state": "online", 00:19:44.060 "raid_level": "raid1", 00:19:44.060 "superblock": true, 00:19:44.060 "num_base_bdevs": 2, 00:19:44.060 "num_base_bdevs_discovered": 1, 00:19:44.060 "num_base_bdevs_operational": 1, 
00:19:44.060 "base_bdevs_list": [ 00:19:44.060 { 00:19:44.060 "name": null, 00:19:44.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.060 "is_configured": false, 00:19:44.060 "data_offset": 0, 00:19:44.060 "data_size": 7936 00:19:44.060 }, 00:19:44.060 { 00:19:44.060 "name": "BaseBdev2", 00:19:44.060 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:44.060 "is_configured": true, 00:19:44.060 "data_offset": 256, 00:19:44.060 "data_size": 7936 00:19:44.060 } 00:19:44.060 ] 00:19:44.060 }' 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.060 [2024-11-26 19:09:10.662807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:44.060 [2024-11-26 19:09:10.676151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.060 19:09:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:44.060 [2024-11-26 19:09:10.678814] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:45.437 19:09:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.437 "name": "raid_bdev1", 00:19:45.437 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:45.437 "strip_size_kb": 0, 00:19:45.437 "state": "online", 00:19:45.437 "raid_level": "raid1", 00:19:45.437 "superblock": true, 00:19:45.437 "num_base_bdevs": 2, 00:19:45.437 "num_base_bdevs_discovered": 2, 00:19:45.437 "num_base_bdevs_operational": 2, 00:19:45.437 "process": { 00:19:45.437 "type": "rebuild", 00:19:45.437 "target": "spare", 00:19:45.437 "progress": { 00:19:45.437 "blocks": 2560, 00:19:45.437 "percent": 32 00:19:45.437 } 00:19:45.437 }, 00:19:45.437 "base_bdevs_list": [ 00:19:45.437 { 00:19:45.437 "name": "spare", 00:19:45.437 "uuid": 
"fd4fdd4f-9066-5279-832a-8f0c54a05645", 00:19:45.437 "is_configured": true, 00:19:45.437 "data_offset": 256, 00:19:45.437 "data_size": 7936 00:19:45.437 }, 00:19:45.437 { 00:19:45.437 "name": "BaseBdev2", 00:19:45.437 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:45.437 "is_configured": true, 00:19:45.437 "data_offset": 256, 00:19:45.437 "data_size": 7936 00:19:45.437 } 00:19:45.437 ] 00:19:45.437 }' 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:45.437 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=789 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.437 
19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.437 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.437 "name": "raid_bdev1", 00:19:45.437 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:45.437 "strip_size_kb": 0, 00:19:45.437 "state": "online", 00:19:45.437 "raid_level": "raid1", 00:19:45.437 "superblock": true, 00:19:45.437 "num_base_bdevs": 2, 00:19:45.437 "num_base_bdevs_discovered": 2, 00:19:45.437 "num_base_bdevs_operational": 2, 00:19:45.437 "process": { 00:19:45.437 "type": "rebuild", 00:19:45.437 "target": "spare", 00:19:45.437 "progress": { 00:19:45.437 "blocks": 2816, 00:19:45.437 "percent": 35 00:19:45.437 } 00:19:45.437 }, 00:19:45.437 "base_bdevs_list": [ 00:19:45.437 { 00:19:45.437 "name": "spare", 00:19:45.437 "uuid": "fd4fdd4f-9066-5279-832a-8f0c54a05645", 00:19:45.437 "is_configured": true, 00:19:45.437 "data_offset": 256, 00:19:45.437 "data_size": 7936 00:19:45.437 
}, 00:19:45.437 { 00:19:45.437 "name": "BaseBdev2", 00:19:45.437 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:45.437 "is_configured": true, 00:19:45.438 "data_offset": 256, 00:19:45.438 "data_size": 7936 00:19:45.438 } 00:19:45.438 ] 00:19:45.438 }' 00:19:45.438 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.438 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.438 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.438 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.438 19:09:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.424 19:09:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:46.424 19:09:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.424 19:09:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.424 19:09:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:46.424 19:09:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:46.425 19:09:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.425 19:09:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.425 19:09:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.425 19:09:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.425 19:09:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.425 19:09:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.703 19:09:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.703 "name": "raid_bdev1", 00:19:46.703 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:46.703 "strip_size_kb": 0, 00:19:46.703 "state": "online", 00:19:46.703 "raid_level": "raid1", 00:19:46.703 "superblock": true, 00:19:46.703 "num_base_bdevs": 2, 00:19:46.703 "num_base_bdevs_discovered": 2, 00:19:46.703 "num_base_bdevs_operational": 2, 00:19:46.703 "process": { 00:19:46.703 "type": "rebuild", 00:19:46.703 "target": "spare", 00:19:46.703 "progress": { 00:19:46.703 "blocks": 5632, 00:19:46.703 "percent": 70 00:19:46.703 } 00:19:46.703 }, 00:19:46.703 "base_bdevs_list": [ 00:19:46.703 { 00:19:46.703 "name": "spare", 00:19:46.703 "uuid": "fd4fdd4f-9066-5279-832a-8f0c54a05645", 00:19:46.703 "is_configured": true, 00:19:46.703 "data_offset": 256, 00:19:46.703 "data_size": 7936 00:19:46.703 }, 00:19:46.703 { 00:19:46.703 "name": "BaseBdev2", 00:19:46.703 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:46.703 "is_configured": true, 00:19:46.703 "data_offset": 256, 00:19:46.703 "data_size": 7936 00:19:46.703 } 00:19:46.703 ] 00:19:46.703 }' 00:19:46.703 19:09:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.703 19:09:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:46.703 19:09:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.703 19:09:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.703 19:09:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:47.270 [2024-11-26 19:09:13.808231] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:47.270 [2024-11-26 19:09:13.808378] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:47.270 [2024-11-26 19:09:13.808570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.836 "name": "raid_bdev1", 00:19:47.836 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:47.836 
"strip_size_kb": 0, 00:19:47.836 "state": "online", 00:19:47.836 "raid_level": "raid1", 00:19:47.836 "superblock": true, 00:19:47.836 "num_base_bdevs": 2, 00:19:47.836 "num_base_bdevs_discovered": 2, 00:19:47.836 "num_base_bdevs_operational": 2, 00:19:47.836 "base_bdevs_list": [ 00:19:47.836 { 00:19:47.836 "name": "spare", 00:19:47.836 "uuid": "fd4fdd4f-9066-5279-832a-8f0c54a05645", 00:19:47.836 "is_configured": true, 00:19:47.836 "data_offset": 256, 00:19:47.836 "data_size": 7936 00:19:47.836 }, 00:19:47.836 { 00:19:47.836 "name": "BaseBdev2", 00:19:47.836 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:47.836 "is_configured": true, 00:19:47.836 "data_offset": 256, 00:19:47.836 "data_size": 7936 00:19:47.836 } 00:19:47.836 ] 00:19:47.836 }' 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.836 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.837 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.837 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.837 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.837 19:09:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.837 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.837 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.837 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.837 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.837 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.837 "name": "raid_bdev1", 00:19:47.837 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:47.837 "strip_size_kb": 0, 00:19:47.837 "state": "online", 00:19:47.837 "raid_level": "raid1", 00:19:47.837 "superblock": true, 00:19:47.837 "num_base_bdevs": 2, 00:19:47.837 "num_base_bdevs_discovered": 2, 00:19:47.837 "num_base_bdevs_operational": 2, 00:19:47.837 "base_bdevs_list": [ 00:19:47.837 { 00:19:47.837 "name": "spare", 00:19:47.837 "uuid": "fd4fdd4f-9066-5279-832a-8f0c54a05645", 00:19:47.837 "is_configured": true, 00:19:47.837 "data_offset": 256, 00:19:47.837 "data_size": 7936 00:19:47.837 }, 00:19:47.837 { 00:19:47.837 "name": "BaseBdev2", 00:19:47.837 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:47.837 "is_configured": true, 00:19:47.837 "data_offset": 256, 00:19:47.837 "data_size": 7936 00:19:47.837 } 00:19:47.837 ] 00:19:47.837 }' 00:19:47.837 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.837 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.837 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.096 19:09:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.096 "name": "raid_bdev1", 00:19:48.096 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:48.096 "strip_size_kb": 0, 00:19:48.096 "state": "online", 00:19:48.096 "raid_level": "raid1", 00:19:48.096 "superblock": true, 00:19:48.096 "num_base_bdevs": 2, 00:19:48.096 "num_base_bdevs_discovered": 2, 00:19:48.096 "num_base_bdevs_operational": 2, 00:19:48.096 "base_bdevs_list": [ 00:19:48.096 { 00:19:48.096 "name": "spare", 00:19:48.096 "uuid": "fd4fdd4f-9066-5279-832a-8f0c54a05645", 00:19:48.096 "is_configured": true, 00:19:48.096 "data_offset": 256, 00:19:48.096 "data_size": 7936 00:19:48.096 }, 00:19:48.096 { 00:19:48.096 "name": "BaseBdev2", 00:19:48.096 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:48.096 "is_configured": true, 00:19:48.096 "data_offset": 256, 00:19:48.096 "data_size": 7936 00:19:48.096 } 00:19:48.096 ] 00:19:48.096 }' 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.096 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.664 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:48.664 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.664 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.664 [2024-11-26 19:09:14.996493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:48.664 [2024-11-26 19:09:14.996539] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.664 [2024-11-26 19:09:14.996665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.664 [2024-11-26 19:09:14.996768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:19:48.664 [2024-11-26 19:09:14.996786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:48.664 19:09:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:48.664 19:09:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.664 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:48.924 /dev/nbd0 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.924 1+0 records in 00:19:48.924 1+0 records out 00:19:48.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353754 
s, 11.6 MB/s 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.924 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:49.183 /dev/nbd1 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:49.183 1+0 records in 00:19:49.183 1+0 records out 00:19:49.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442946 s, 9.2 MB/s 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:49.183 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:49.441 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:49.441 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:49.441 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:49.441 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.441 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:49.441 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.441 19:09:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:49.700 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:49.700 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:49.700 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:49.700 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.700 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.700 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:49.700 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:49.700 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.700 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.700 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:49.959 
19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.959 [2024-11-26 19:09:16.467653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:49.959 [2024-11-26 19:09:16.467720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.959 [2024-11-26 19:09:16.467755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:19:49.959 [2024-11-26 19:09:16.467772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.959 [2024-11-26 19:09:16.470583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.959 [2024-11-26 19:09:16.470627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:49.959 [2024-11-26 19:09:16.470760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:49.959 [2024-11-26 19:09:16.470836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.959 [2024-11-26 19:09:16.470980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:49.959 spare 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.959 [2024-11-26 19:09:16.571112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:49.959 [2024-11-26 19:09:16.571201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:49.959 [2024-11-26 19:09:16.571426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:49.959 [2024-11-26 19:09:16.571674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:49.959 [2024-11-26 19:09:16.571694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:49.959 [2024-11-26 19:09:16.571908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.959 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.218 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.218 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.218 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.218 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.218 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.218 19:09:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.218 "name": "raid_bdev1", 00:19:50.218 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:50.218 "strip_size_kb": 0, 00:19:50.218 "state": "online", 00:19:50.218 "raid_level": "raid1", 00:19:50.218 "superblock": true, 00:19:50.218 "num_base_bdevs": 2, 00:19:50.218 "num_base_bdevs_discovered": 2, 00:19:50.218 "num_base_bdevs_operational": 2, 00:19:50.218 "base_bdevs_list": [ 00:19:50.218 { 00:19:50.218 "name": "spare", 00:19:50.218 "uuid": "fd4fdd4f-9066-5279-832a-8f0c54a05645", 00:19:50.218 "is_configured": true, 00:19:50.218 "data_offset": 256, 00:19:50.218 "data_size": 7936 00:19:50.218 }, 00:19:50.218 { 00:19:50.218 "name": "BaseBdev2", 00:19:50.218 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:50.218 "is_configured": true, 00:19:50.218 "data_offset": 256, 00:19:50.218 "data_size": 7936 00:19:50.218 } 00:19:50.218 ] 00:19:50.218 }' 00:19:50.218 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.218 19:09:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.476 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.476 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.476 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:50.476 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:50.476 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.476 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.476 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.476 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.477 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.477 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.735 "name": "raid_bdev1", 00:19:50.735 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:50.735 "strip_size_kb": 0, 00:19:50.735 "state": "online", 00:19:50.735 "raid_level": "raid1", 00:19:50.735 "superblock": true, 00:19:50.735 "num_base_bdevs": 2, 00:19:50.735 "num_base_bdevs_discovered": 2, 00:19:50.735 "num_base_bdevs_operational": 2, 00:19:50.735 "base_bdevs_list": [ 00:19:50.735 { 00:19:50.735 "name": "spare", 00:19:50.735 "uuid": "fd4fdd4f-9066-5279-832a-8f0c54a05645", 00:19:50.735 "is_configured": true, 00:19:50.735 "data_offset": 256, 00:19:50.735 "data_size": 7936 00:19:50.735 }, 00:19:50.735 { 00:19:50.735 "name": "BaseBdev2", 00:19:50.735 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:50.735 "is_configured": true, 00:19:50.735 "data_offset": 256, 00:19:50.735 "data_size": 7936 00:19:50.735 } 00:19:50.735 ] 00:19:50.735 }' 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.735 [2024-11-26 19:09:17.276138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:50.735 19:09:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.735 "name": "raid_bdev1", 00:19:50.735 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:50.735 "strip_size_kb": 0, 00:19:50.735 "state": "online", 00:19:50.735 "raid_level": "raid1", 00:19:50.735 "superblock": true, 00:19:50.735 "num_base_bdevs": 2, 00:19:50.735 "num_base_bdevs_discovered": 1, 00:19:50.735 "num_base_bdevs_operational": 1, 00:19:50.735 "base_bdevs_list": [ 00:19:50.735 { 00:19:50.735 "name": null, 00:19:50.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.735 "is_configured": false, 00:19:50.735 "data_offset": 0, 00:19:50.735 "data_size": 7936 00:19:50.735 }, 00:19:50.735 { 00:19:50.735 "name": "BaseBdev2", 00:19:50.735 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:50.735 "is_configured": true, 00:19:50.735 "data_offset": 256, 00:19:50.735 "data_size": 7936 00:19:50.735 } 
00:19:50.735 ] 00:19:50.735 }' 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.735 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.302 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:51.302 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.302 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.302 [2024-11-26 19:09:17.772277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.302 [2024-11-26 19:09:17.772570] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:51.302 [2024-11-26 19:09:17.772600] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:51.302 [2024-11-26 19:09:17.772656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.302 [2024-11-26 19:09:17.785512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:51.302 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.302 19:09:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:51.302 [2024-11-26 19:09:17.788162] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:52.237 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.237 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.237 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.237 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.237 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.237 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.237 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.237 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.237 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.237 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.237 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.237 "name": "raid_bdev1", 00:19:52.237 
"uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:52.237 "strip_size_kb": 0, 00:19:52.237 "state": "online", 00:19:52.237 "raid_level": "raid1", 00:19:52.237 "superblock": true, 00:19:52.237 "num_base_bdevs": 2, 00:19:52.237 "num_base_bdevs_discovered": 2, 00:19:52.237 "num_base_bdevs_operational": 2, 00:19:52.237 "process": { 00:19:52.237 "type": "rebuild", 00:19:52.237 "target": "spare", 00:19:52.237 "progress": { 00:19:52.237 "blocks": 2560, 00:19:52.237 "percent": 32 00:19:52.237 } 00:19:52.237 }, 00:19:52.237 "base_bdevs_list": [ 00:19:52.237 { 00:19:52.237 "name": "spare", 00:19:52.237 "uuid": "fd4fdd4f-9066-5279-832a-8f0c54a05645", 00:19:52.237 "is_configured": true, 00:19:52.237 "data_offset": 256, 00:19:52.237 "data_size": 7936 00:19:52.237 }, 00:19:52.237 { 00:19:52.238 "name": "BaseBdev2", 00:19:52.238 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:52.238 "is_configured": true, 00:19:52.238 "data_offset": 256, 00:19:52.238 "data_size": 7936 00:19:52.238 } 00:19:52.238 ] 00:19:52.238 }' 00:19:52.238 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.496 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.496 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.496 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.496 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:52.496 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.496 19:09:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.496 [2024-11-26 19:09:18.970887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.496 
[2024-11-26 19:09:18.999669] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:52.496 [2024-11-26 19:09:18.999779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.496 [2024-11-26 19:09:18.999806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.496 [2024-11-26 19:09:18.999836] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.496 19:09:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.496 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.496 "name": "raid_bdev1", 00:19:52.497 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:52.497 "strip_size_kb": 0, 00:19:52.497 "state": "online", 00:19:52.497 "raid_level": "raid1", 00:19:52.497 "superblock": true, 00:19:52.497 "num_base_bdevs": 2, 00:19:52.497 "num_base_bdevs_discovered": 1, 00:19:52.497 "num_base_bdevs_operational": 1, 00:19:52.497 "base_bdevs_list": [ 00:19:52.497 { 00:19:52.497 "name": null, 00:19:52.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.497 "is_configured": false, 00:19:52.497 "data_offset": 0, 00:19:52.497 "data_size": 7936 00:19:52.497 }, 00:19:52.497 { 00:19:52.497 "name": "BaseBdev2", 00:19:52.497 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:52.497 "is_configured": true, 00:19:52.497 "data_offset": 256, 00:19:52.497 "data_size": 7936 00:19:52.497 } 00:19:52.497 ] 00:19:52.497 }' 00:19:52.497 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.497 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.068 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:53.068 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.068 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.068 [2024-11-26 19:09:19.551489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:53.068 [2024-11-26 19:09:19.551581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.068 [2024-11-26 19:09:19.551621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:53.068 [2024-11-26 19:09:19.551641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.068 [2024-11-26 19:09:19.552015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.068 [2024-11-26 19:09:19.552046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:53.068 [2024-11-26 19:09:19.552144] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:53.068 [2024-11-26 19:09:19.552178] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:53.068 [2024-11-26 19:09:19.552192] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:53.068 [2024-11-26 19:09:19.552231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.068 [2024-11-26 19:09:19.565296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:53.068 spare 00:19:53.068 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.068 19:09:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:53.068 [2024-11-26 19:09:19.567990] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:54.003 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.003 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.003 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.003 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.003 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.003 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.003 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.003 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.003 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.003 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.261 "name": 
"raid_bdev1", 00:19:54.261 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:54.261 "strip_size_kb": 0, 00:19:54.261 "state": "online", 00:19:54.261 "raid_level": "raid1", 00:19:54.261 "superblock": true, 00:19:54.261 "num_base_bdevs": 2, 00:19:54.261 "num_base_bdevs_discovered": 2, 00:19:54.261 "num_base_bdevs_operational": 2, 00:19:54.261 "process": { 00:19:54.261 "type": "rebuild", 00:19:54.261 "target": "spare", 00:19:54.261 "progress": { 00:19:54.261 "blocks": 2304, 00:19:54.261 "percent": 29 00:19:54.261 } 00:19:54.261 }, 00:19:54.261 "base_bdevs_list": [ 00:19:54.261 { 00:19:54.261 "name": "spare", 00:19:54.261 "uuid": "fd4fdd4f-9066-5279-832a-8f0c54a05645", 00:19:54.261 "is_configured": true, 00:19:54.261 "data_offset": 256, 00:19:54.261 "data_size": 7936 00:19:54.261 }, 00:19:54.261 { 00:19:54.261 "name": "BaseBdev2", 00:19:54.261 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:54.261 "is_configured": true, 00:19:54.261 "data_offset": 256, 00:19:54.261 "data_size": 7936 00:19:54.261 } 00:19:54.261 ] 00:19:54.261 }' 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.261 [2024-11-26 19:09:20.730135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:54.261 [2024-11-26 19:09:20.780235] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:54.261 [2024-11-26 19:09:20.780376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.261 [2024-11-26 19:09:20.780408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.261 [2024-11-26 19:09:20.780421] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.261 "name": "raid_bdev1", 00:19:54.261 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:54.261 "strip_size_kb": 0, 00:19:54.261 "state": "online", 00:19:54.261 "raid_level": "raid1", 00:19:54.261 "superblock": true, 00:19:54.261 "num_base_bdevs": 2, 00:19:54.261 "num_base_bdevs_discovered": 1, 00:19:54.261 "num_base_bdevs_operational": 1, 00:19:54.261 "base_bdevs_list": [ 00:19:54.261 { 00:19:54.261 "name": null, 00:19:54.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.261 "is_configured": false, 00:19:54.261 "data_offset": 0, 00:19:54.261 "data_size": 7936 00:19:54.261 }, 00:19:54.261 { 00:19:54.261 "name": "BaseBdev2", 00:19:54.261 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:54.261 "is_configured": true, 00:19:54.261 "data_offset": 256, 00:19:54.261 "data_size": 7936 00:19:54.261 } 00:19:54.261 ] 00:19:54.261 }' 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.261 19:09:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.829 19:09:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.829 "name": "raid_bdev1", 00:19:54.829 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:54.829 "strip_size_kb": 0, 00:19:54.829 "state": "online", 00:19:54.829 "raid_level": "raid1", 00:19:54.829 "superblock": true, 00:19:54.829 "num_base_bdevs": 2, 00:19:54.829 "num_base_bdevs_discovered": 1, 00:19:54.829 "num_base_bdevs_operational": 1, 00:19:54.829 "base_bdevs_list": [ 00:19:54.829 { 00:19:54.829 "name": null, 00:19:54.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.829 "is_configured": false, 00:19:54.829 "data_offset": 0, 00:19:54.829 "data_size": 7936 00:19:54.829 }, 00:19:54.829 { 00:19:54.829 "name": "BaseBdev2", 00:19:54.829 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:54.829 "is_configured": true, 00:19:54.829 "data_offset": 256, 00:19:54.829 "data_size": 7936 00:19:54.829 } 00:19:54.829 ] 00:19:54.829 }' 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:54.829 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.182 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:55.182 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:55.182 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.182 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.182 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.182 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:55.182 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.182 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.182 [2024-11-26 19:09:21.496376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:55.182 [2024-11-26 19:09:21.496466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.182 [2024-11-26 19:09:21.496504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:55.182 [2024-11-26 19:09:21.496521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.182 [2024-11-26 19:09:21.496894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.182 [2024-11-26 19:09:21.496918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:19:55.182 [2024-11-26 19:09:21.496999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:55.182 [2024-11-26 19:09:21.497022] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:55.182 [2024-11-26 19:09:21.497037] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:55.182 [2024-11-26 19:09:21.497052] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:55.182 BaseBdev1 00:19:55.182 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.182 19:09:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.116 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.116 "name": "raid_bdev1", 00:19:56.116 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:56.116 "strip_size_kb": 0, 00:19:56.116 "state": "online", 00:19:56.116 "raid_level": "raid1", 00:19:56.116 "superblock": true, 00:19:56.117 "num_base_bdevs": 2, 00:19:56.117 "num_base_bdevs_discovered": 1, 00:19:56.117 "num_base_bdevs_operational": 1, 00:19:56.117 "base_bdevs_list": [ 00:19:56.117 { 00:19:56.117 "name": null, 00:19:56.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.117 "is_configured": false, 00:19:56.117 "data_offset": 0, 00:19:56.117 "data_size": 7936 00:19:56.117 }, 00:19:56.117 { 00:19:56.117 "name": "BaseBdev2", 00:19:56.117 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:56.117 "is_configured": true, 00:19:56.117 "data_offset": 256, 00:19:56.117 "data_size": 7936 00:19:56.117 } 00:19:56.117 ] 00:19:56.117 }' 00:19:56.117 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.117 19:09:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.688 "name": "raid_bdev1", 00:19:56.688 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:56.688 "strip_size_kb": 0, 00:19:56.688 "state": "online", 00:19:56.688 "raid_level": "raid1", 00:19:56.688 "superblock": true, 00:19:56.688 "num_base_bdevs": 2, 00:19:56.688 "num_base_bdevs_discovered": 1, 00:19:56.688 "num_base_bdevs_operational": 1, 00:19:56.688 "base_bdevs_list": [ 00:19:56.688 { 00:19:56.688 "name": null, 00:19:56.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.688 "is_configured": false, 00:19:56.688 "data_offset": 0, 00:19:56.688 "data_size": 7936 00:19:56.688 }, 00:19:56.688 { 00:19:56.688 "name": "BaseBdev2", 00:19:56.688 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:56.688 "is_configured": 
true, 00:19:56.688 "data_offset": 256, 00:19:56.688 "data_size": 7936 00:19:56.688 } 00:19:56.688 ] 00:19:56.688 }' 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.688 [2024-11-26 19:09:23.208900] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.688 [2024-11-26 19:09:23.209152] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:56.688 [2024-11-26 19:09:23.209179] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:56.688 request: 00:19:56.688 { 00:19:56.688 "base_bdev": "BaseBdev1", 00:19:56.688 "raid_bdev": "raid_bdev1", 00:19:56.688 "method": "bdev_raid_add_base_bdev", 00:19:56.688 "req_id": 1 00:19:56.688 } 00:19:56.688 Got JSON-RPC error response 00:19:56.688 response: 00:19:56.688 { 00:19:56.688 "code": -22, 00:19:56.688 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:56.688 } 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:56.688 19:09:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.624 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.882 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.882 "name": "raid_bdev1", 00:19:57.882 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:57.882 "strip_size_kb": 0, 00:19:57.882 "state": "online", 00:19:57.882 "raid_level": "raid1", 00:19:57.882 "superblock": true, 00:19:57.882 "num_base_bdevs": 2, 00:19:57.882 "num_base_bdevs_discovered": 1, 00:19:57.882 "num_base_bdevs_operational": 1, 00:19:57.882 "base_bdevs_list": [ 00:19:57.882 { 00:19:57.882 "name": null, 00:19:57.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.882 "is_configured": false, 00:19:57.882 
"data_offset": 0, 00:19:57.882 "data_size": 7936 00:19:57.882 }, 00:19:57.882 { 00:19:57.882 "name": "BaseBdev2", 00:19:57.882 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:57.882 "is_configured": true, 00:19:57.882 "data_offset": 256, 00:19:57.882 "data_size": 7936 00:19:57.882 } 00:19:57.882 ] 00:19:57.882 }' 00:19:57.882 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.882 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.141 "name": "raid_bdev1", 00:19:58.141 "uuid": "2fa72a4e-9092-4c51-b8bc-5d29698f39b8", 00:19:58.141 
"strip_size_kb": 0, 00:19:58.141 "state": "online", 00:19:58.141 "raid_level": "raid1", 00:19:58.141 "superblock": true, 00:19:58.141 "num_base_bdevs": 2, 00:19:58.141 "num_base_bdevs_discovered": 1, 00:19:58.141 "num_base_bdevs_operational": 1, 00:19:58.141 "base_bdevs_list": [ 00:19:58.141 { 00:19:58.141 "name": null, 00:19:58.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.141 "is_configured": false, 00:19:58.141 "data_offset": 0, 00:19:58.141 "data_size": 7936 00:19:58.141 }, 00:19:58.141 { 00:19:58.141 "name": "BaseBdev2", 00:19:58.141 "uuid": "6718b918-961e-5ecd-b36d-7cac3efb8c40", 00:19:58.141 "is_configured": true, 00:19:58.141 "data_offset": 256, 00:19:58.141 "data_size": 7936 00:19:58.141 } 00:19:58.141 ] 00:19:58.141 }' 00:19:58.141 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88748 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88748 ']' 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88748 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88748 00:19:58.404 19:09:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:58.404 killing process with pid 88748 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88748' 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88748 00:19:58.404 Received shutdown signal, test time was about 60.000000 seconds 00:19:58.404 00:19:58.404 Latency(us) 00:19:58.404 [2024-11-26T19:09:25.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.404 [2024-11-26T19:09:25.027Z] =================================================================================================================== 00:19:58.404 [2024-11-26T19:09:25.027Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.404 [2024-11-26 19:09:24.907929] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:58.404 19:09:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88748 00:19:58.404 [2024-11-26 19:09:24.908105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.404 [2024-11-26 19:09:24.908178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.404 [2024-11-26 19:09:24.908207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:58.669 [2024-11-26 19:09:25.222074] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.042 19:09:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:20:00.042 00:20:00.042 real 0m21.576s 00:20:00.042 user 0m29.147s 00:20:00.042 sys 0m2.553s 00:20:00.042 19:09:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.042 19:09:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.042 ************************************ 00:20:00.042 END TEST raid_rebuild_test_sb_md_separate 00:20:00.042 ************************************ 00:20:00.042 19:09:26 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:00.042 19:09:26 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:00.042 19:09:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:00.042 19:09:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.042 19:09:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:00.042 ************************************ 00:20:00.042 START TEST raid_state_function_test_sb_md_interleaved 00:20:00.042 ************************************ 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:00.042 19:09:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89451 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:00.042 Process raid pid: 89451 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89451' 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89451 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89451 ']' 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.042 19:09:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.042 [2024-11-26 19:09:26.523185] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:20:00.042 [2024-11-26 19:09:26.523379] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.300 [2024-11-26 19:09:26.704599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.300 [2024-11-26 19:09:26.885479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.558 [2024-11-26 19:09:27.113382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:00.558 [2024-11-26 19:09:27.113447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.128 [2024-11-26 19:09:27.540022] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:01.128 [2024-11-26 19:09:27.540099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:01.128 [2024-11-26 19:09:27.540117] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:01.128 [2024-11-26 19:09:27.540133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:01.128 19:09:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.128 19:09:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.128 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.128 "name": "Existed_Raid", 00:20:01.128 "uuid": "4789e724-f702-4c51-8382-a6bd8b7654fe", 00:20:01.129 "strip_size_kb": 0, 00:20:01.129 "state": "configuring", 00:20:01.129 "raid_level": "raid1", 00:20:01.129 "superblock": true, 00:20:01.129 "num_base_bdevs": 2, 00:20:01.129 "num_base_bdevs_discovered": 0, 00:20:01.129 "num_base_bdevs_operational": 2, 00:20:01.129 "base_bdevs_list": [ 00:20:01.129 { 00:20:01.129 "name": "BaseBdev1", 00:20:01.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.129 "is_configured": false, 00:20:01.129 "data_offset": 0, 00:20:01.129 "data_size": 0 00:20:01.129 }, 00:20:01.129 { 00:20:01.129 "name": "BaseBdev2", 00:20:01.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.129 "is_configured": false, 00:20:01.129 "data_offset": 0, 00:20:01.129 "data_size": 0 00:20:01.129 } 00:20:01.129 ] 00:20:01.129 }' 00:20:01.129 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.129 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.396 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:01.396 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.396 19:09:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.396 [2024-11-26 19:09:28.004080] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:01.396 [2024-11-26 19:09:28.004128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:01.396 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.396 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:01.396 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.396 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.396 [2024-11-26 19:09:28.016109] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:01.396 [2024-11-26 19:09:28.016182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:01.396 [2024-11-26 19:09:28.016199] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:01.396 [2024-11-26 19:09:28.016218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.654 [2024-11-26 19:09:28.064655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:01.654 BaseBdev1 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.654 [ 00:20:01.654 { 00:20:01.654 "name": "BaseBdev1", 00:20:01.654 "aliases": [ 00:20:01.654 "8a35a475-016f-4de0-96f5-703e5ca752f2" 00:20:01.654 ], 00:20:01.654 "product_name": "Malloc disk", 00:20:01.654 "block_size": 4128, 00:20:01.654 "num_blocks": 8192, 00:20:01.654 "uuid": "8a35a475-016f-4de0-96f5-703e5ca752f2", 00:20:01.654 "md_size": 32, 00:20:01.654 
"md_interleave": true, 00:20:01.654 "dif_type": 0, 00:20:01.654 "assigned_rate_limits": { 00:20:01.654 "rw_ios_per_sec": 0, 00:20:01.654 "rw_mbytes_per_sec": 0, 00:20:01.654 "r_mbytes_per_sec": 0, 00:20:01.654 "w_mbytes_per_sec": 0 00:20:01.654 }, 00:20:01.654 "claimed": true, 00:20:01.654 "claim_type": "exclusive_write", 00:20:01.654 "zoned": false, 00:20:01.654 "supported_io_types": { 00:20:01.654 "read": true, 00:20:01.654 "write": true, 00:20:01.654 "unmap": true, 00:20:01.654 "flush": true, 00:20:01.654 "reset": true, 00:20:01.654 "nvme_admin": false, 00:20:01.654 "nvme_io": false, 00:20:01.654 "nvme_io_md": false, 00:20:01.654 "write_zeroes": true, 00:20:01.654 "zcopy": true, 00:20:01.654 "get_zone_info": false, 00:20:01.654 "zone_management": false, 00:20:01.654 "zone_append": false, 00:20:01.654 "compare": false, 00:20:01.654 "compare_and_write": false, 00:20:01.654 "abort": true, 00:20:01.654 "seek_hole": false, 00:20:01.654 "seek_data": false, 00:20:01.654 "copy": true, 00:20:01.654 "nvme_iov_md": false 00:20:01.654 }, 00:20:01.654 "memory_domains": [ 00:20:01.654 { 00:20:01.654 "dma_device_id": "system", 00:20:01.654 "dma_device_type": 1 00:20:01.654 }, 00:20:01.654 { 00:20:01.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.654 "dma_device_type": 2 00:20:01.654 } 00:20:01.654 ], 00:20:01.654 "driver_specific": {} 00:20:01.654 } 00:20:01.654 ] 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:01.654 19:09:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.654 "name": "Existed_Raid", 00:20:01.654 "uuid": "8e053699-f033-446e-8fe3-872817e8ae2f", 00:20:01.654 "strip_size_kb": 0, 00:20:01.654 "state": "configuring", 00:20:01.654 "raid_level": "raid1", 
00:20:01.654 "superblock": true, 00:20:01.654 "num_base_bdevs": 2, 00:20:01.654 "num_base_bdevs_discovered": 1, 00:20:01.654 "num_base_bdevs_operational": 2, 00:20:01.654 "base_bdevs_list": [ 00:20:01.654 { 00:20:01.654 "name": "BaseBdev1", 00:20:01.654 "uuid": "8a35a475-016f-4de0-96f5-703e5ca752f2", 00:20:01.654 "is_configured": true, 00:20:01.654 "data_offset": 256, 00:20:01.654 "data_size": 7936 00:20:01.654 }, 00:20:01.654 { 00:20:01.654 "name": "BaseBdev2", 00:20:01.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.654 "is_configured": false, 00:20:01.654 "data_offset": 0, 00:20:01.654 "data_size": 0 00:20:01.654 } 00:20:01.654 ] 00:20:01.654 }' 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.654 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.221 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:02.221 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.221 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.221 [2024-11-26 19:09:28.580910] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:02.221 [2024-11-26 19:09:28.580994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:02.221 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.221 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:02.221 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:02.221 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.221 [2024-11-26 19:09:28.593003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:02.221 [2024-11-26 19:09:28.595705] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:02.221 [2024-11-26 19:09:28.595885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:02.221 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.221 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:02.221 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.222 
19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.222 "name": "Existed_Raid", 00:20:02.222 "uuid": "bb2bdea8-1a28-427a-a3cf-02195c126250", 00:20:02.222 "strip_size_kb": 0, 00:20:02.222 "state": "configuring", 00:20:02.222 "raid_level": "raid1", 00:20:02.222 "superblock": true, 00:20:02.222 "num_base_bdevs": 2, 00:20:02.222 "num_base_bdevs_discovered": 1, 00:20:02.222 "num_base_bdevs_operational": 2, 00:20:02.222 "base_bdevs_list": [ 00:20:02.222 { 00:20:02.222 "name": "BaseBdev1", 00:20:02.222 "uuid": "8a35a475-016f-4de0-96f5-703e5ca752f2", 00:20:02.222 "is_configured": true, 00:20:02.222 "data_offset": 256, 00:20:02.222 "data_size": 7936 00:20:02.222 }, 00:20:02.222 { 00:20:02.222 "name": "BaseBdev2", 00:20:02.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.222 "is_configured": false, 00:20:02.222 "data_offset": 0, 00:20:02.222 "data_size": 0 00:20:02.222 } 00:20:02.222 ] 00:20:02.222 }' 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:02.222 19:09:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.787 [2024-11-26 19:09:29.175278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:02.787 [2024-11-26 19:09:29.175616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:02.787 [2024-11-26 19:09:29.175637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:02.787 [2024-11-26 19:09:29.175741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:02.787 [2024-11-26 19:09:29.175849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:02.787 [2024-11-26 19:09:29.175869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:02.787 [2024-11-26 19:09:29.175960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.787 BaseBdev2 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.787 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.787 [ 00:20:02.787 { 00:20:02.787 "name": "BaseBdev2", 00:20:02.787 "aliases": [ 00:20:02.787 "56724983-25d9-4b60-97b4-d2c3a7660833" 00:20:02.787 ], 00:20:02.787 "product_name": "Malloc disk", 00:20:02.787 "block_size": 4128, 00:20:02.787 "num_blocks": 8192, 00:20:02.787 "uuid": "56724983-25d9-4b60-97b4-d2c3a7660833", 00:20:02.787 "md_size": 32, 00:20:02.787 "md_interleave": true, 00:20:02.787 "dif_type": 0, 00:20:02.787 "assigned_rate_limits": { 00:20:02.787 "rw_ios_per_sec": 0, 00:20:02.787 "rw_mbytes_per_sec": 0, 00:20:02.787 "r_mbytes_per_sec": 0, 00:20:02.787 "w_mbytes_per_sec": 0 00:20:02.787 }, 00:20:02.787 "claimed": true, 00:20:02.787 "claim_type": "exclusive_write", 
00:20:02.787 "zoned": false, 00:20:02.787 "supported_io_types": { 00:20:02.787 "read": true, 00:20:02.787 "write": true, 00:20:02.787 "unmap": true, 00:20:02.787 "flush": true, 00:20:02.787 "reset": true, 00:20:02.787 "nvme_admin": false, 00:20:02.788 "nvme_io": false, 00:20:02.788 "nvme_io_md": false, 00:20:02.788 "write_zeroes": true, 00:20:02.788 "zcopy": true, 00:20:02.788 "get_zone_info": false, 00:20:02.788 "zone_management": false, 00:20:02.788 "zone_append": false, 00:20:02.788 "compare": false, 00:20:02.788 "compare_and_write": false, 00:20:02.788 "abort": true, 00:20:02.788 "seek_hole": false, 00:20:02.788 "seek_data": false, 00:20:02.788 "copy": true, 00:20:02.788 "nvme_iov_md": false 00:20:02.788 }, 00:20:02.788 "memory_domains": [ 00:20:02.788 { 00:20:02.788 "dma_device_id": "system", 00:20:02.788 "dma_device_type": 1 00:20:02.788 }, 00:20:02.788 { 00:20:02.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.788 "dma_device_type": 2 00:20:02.788 } 00:20:02.788 ], 00:20:02.788 "driver_specific": {} 00:20:02.788 } 00:20:02.788 ] 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.788 
19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.788 "name": "Existed_Raid", 00:20:02.788 "uuid": "bb2bdea8-1a28-427a-a3cf-02195c126250", 00:20:02.788 "strip_size_kb": 0, 00:20:02.788 "state": "online", 00:20:02.788 "raid_level": "raid1", 00:20:02.788 "superblock": true, 00:20:02.788 "num_base_bdevs": 2, 00:20:02.788 "num_base_bdevs_discovered": 2, 00:20:02.788 
"num_base_bdevs_operational": 2, 00:20:02.788 "base_bdevs_list": [ 00:20:02.788 { 00:20:02.788 "name": "BaseBdev1", 00:20:02.788 "uuid": "8a35a475-016f-4de0-96f5-703e5ca752f2", 00:20:02.788 "is_configured": true, 00:20:02.788 "data_offset": 256, 00:20:02.788 "data_size": 7936 00:20:02.788 }, 00:20:02.788 { 00:20:02.788 "name": "BaseBdev2", 00:20:02.788 "uuid": "56724983-25d9-4b60-97b4-d2c3a7660833", 00:20:02.788 "is_configured": true, 00:20:02.788 "data_offset": 256, 00:20:02.788 "data_size": 7936 00:20:02.788 } 00:20:02.788 ] 00:20:02.788 }' 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.788 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.353 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:03.353 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:03.353 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:03.353 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:03.353 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:03.353 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:03.353 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:03.353 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:03.353 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.353 19:09:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.353 [2024-11-26 19:09:29.735905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.353 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.353 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:03.353 "name": "Existed_Raid", 00:20:03.353 "aliases": [ 00:20:03.353 "bb2bdea8-1a28-427a-a3cf-02195c126250" 00:20:03.353 ], 00:20:03.353 "product_name": "Raid Volume", 00:20:03.353 "block_size": 4128, 00:20:03.353 "num_blocks": 7936, 00:20:03.353 "uuid": "bb2bdea8-1a28-427a-a3cf-02195c126250", 00:20:03.353 "md_size": 32, 00:20:03.353 "md_interleave": true, 00:20:03.353 "dif_type": 0, 00:20:03.353 "assigned_rate_limits": { 00:20:03.353 "rw_ios_per_sec": 0, 00:20:03.353 "rw_mbytes_per_sec": 0, 00:20:03.353 "r_mbytes_per_sec": 0, 00:20:03.353 "w_mbytes_per_sec": 0 00:20:03.353 }, 00:20:03.354 "claimed": false, 00:20:03.354 "zoned": false, 00:20:03.354 "supported_io_types": { 00:20:03.354 "read": true, 00:20:03.354 "write": true, 00:20:03.354 "unmap": false, 00:20:03.354 "flush": false, 00:20:03.354 "reset": true, 00:20:03.354 "nvme_admin": false, 00:20:03.354 "nvme_io": false, 00:20:03.354 "nvme_io_md": false, 00:20:03.354 "write_zeroes": true, 00:20:03.354 "zcopy": false, 00:20:03.354 "get_zone_info": false, 00:20:03.354 "zone_management": false, 00:20:03.354 "zone_append": false, 00:20:03.354 "compare": false, 00:20:03.354 "compare_and_write": false, 00:20:03.354 "abort": false, 00:20:03.354 "seek_hole": false, 00:20:03.354 "seek_data": false, 00:20:03.354 "copy": false, 00:20:03.354 "nvme_iov_md": false 00:20:03.354 }, 00:20:03.354 "memory_domains": [ 00:20:03.354 { 00:20:03.354 "dma_device_id": "system", 00:20:03.354 "dma_device_type": 1 00:20:03.354 }, 00:20:03.354 { 00:20:03.354 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:03.354 "dma_device_type": 2 00:20:03.354 }, 00:20:03.354 { 00:20:03.354 "dma_device_id": "system", 00:20:03.354 "dma_device_type": 1 00:20:03.354 }, 00:20:03.354 { 00:20:03.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.354 "dma_device_type": 2 00:20:03.354 } 00:20:03.354 ], 00:20:03.354 "driver_specific": { 00:20:03.354 "raid": { 00:20:03.354 "uuid": "bb2bdea8-1a28-427a-a3cf-02195c126250", 00:20:03.354 "strip_size_kb": 0, 00:20:03.354 "state": "online", 00:20:03.354 "raid_level": "raid1", 00:20:03.354 "superblock": true, 00:20:03.354 "num_base_bdevs": 2, 00:20:03.354 "num_base_bdevs_discovered": 2, 00:20:03.354 "num_base_bdevs_operational": 2, 00:20:03.354 "base_bdevs_list": [ 00:20:03.354 { 00:20:03.354 "name": "BaseBdev1", 00:20:03.354 "uuid": "8a35a475-016f-4de0-96f5-703e5ca752f2", 00:20:03.354 "is_configured": true, 00:20:03.354 "data_offset": 256, 00:20:03.354 "data_size": 7936 00:20:03.354 }, 00:20:03.354 { 00:20:03.354 "name": "BaseBdev2", 00:20:03.354 "uuid": "56724983-25d9-4b60-97b4-d2c3a7660833", 00:20:03.354 "is_configured": true, 00:20:03.354 "data_offset": 256, 00:20:03.354 "data_size": 7936 00:20:03.354 } 00:20:03.354 ] 00:20:03.354 } 00:20:03.354 } 00:20:03.354 }' 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:03.354 BaseBdev2' 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.354 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.612 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:03.612 
19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:03.612 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:03.612 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.612 19:09:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.612 [2024-11-26 19:09:29.999717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:03.612 19:09:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.612 "name": "Existed_Raid", 00:20:03.612 "uuid": "bb2bdea8-1a28-427a-a3cf-02195c126250", 00:20:03.612 "strip_size_kb": 0, 00:20:03.612 "state": "online", 00:20:03.612 "raid_level": "raid1", 00:20:03.612 "superblock": true, 00:20:03.612 "num_base_bdevs": 2, 00:20:03.612 "num_base_bdevs_discovered": 1, 00:20:03.612 "num_base_bdevs_operational": 1, 00:20:03.612 "base_bdevs_list": [ 00:20:03.612 { 00:20:03.612 "name": null, 00:20:03.612 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:03.612 "is_configured": false, 00:20:03.612 "data_offset": 0, 00:20:03.612 "data_size": 7936 00:20:03.612 }, 00:20:03.612 { 00:20:03.612 "name": "BaseBdev2", 00:20:03.612 "uuid": "56724983-25d9-4b60-97b4-d2c3a7660833", 00:20:03.612 "is_configured": true, 00:20:03.612 "data_offset": 256, 00:20:03.612 "data_size": 7936 00:20:03.612 } 00:20:03.612 ] 00:20:03.612 }' 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.612 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:04.178 19:09:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.178 [2024-11-26 19:09:30.669810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:04.178 [2024-11-26 19:09:30.669986] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:04.178 [2024-11-26 19:09:30.765027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:04.178 [2024-11-26 19:09:30.765341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:04.178 [2024-11-26 19:09:30.765502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.178 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89451 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89451 ']' 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89451 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89451 00:20:04.436 killing process with pid 89451 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89451' 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89451 00:20:04.436 [2024-11-26 19:09:30.861751] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:04.436 19:09:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89451 00:20:04.436 [2024-11-26 19:09:30.877150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:05.811 
19:09:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:05.811 00:20:05.811 real 0m5.624s 00:20:05.811 user 0m8.299s 00:20:05.811 sys 0m0.911s 00:20:05.811 ************************************ 00:20:05.811 END TEST raid_state_function_test_sb_md_interleaved 00:20:05.811 ************************************ 00:20:05.811 19:09:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.811 19:09:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.811 19:09:32 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:05.811 19:09:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:05.811 19:09:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.811 19:09:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:05.811 ************************************ 00:20:05.811 START TEST raid_superblock_test_md_interleaved 00:20:05.811 ************************************ 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:05.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89703 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89703 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89703 ']' 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.811 19:09:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.811 [2024-11-26 19:09:32.219661] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:20:05.811 [2024-11-26 19:09:32.220123] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89703 ] 00:20:05.811 [2024-11-26 19:09:32.407700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.069 [2024-11-26 19:09:32.557202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.329 [2024-11-26 19:09:32.784132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.329 [2024-11-26 19:09:32.784212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.895 malloc1 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.895 [2024-11-26 19:09:33.344941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:06.895 [2024-11-26 19:09:33.345017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.895 [2024-11-26 19:09:33.345053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:06.895 [2024-11-26 19:09:33.345070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.895 [2024-11-26 19:09:33.347760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.895 [2024-11-26 19:09:33.347805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:06.895 pt1 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:06.895 19:09:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.895 malloc2 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.895 [2024-11-26 19:09:33.404467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:06.895 [2024-11-26 19:09:33.404692] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.895 [2024-11-26 19:09:33.404775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:06.895 [2024-11-26 19:09:33.404952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.895 [2024-11-26 19:09:33.407665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.895 [2024-11-26 19:09:33.407825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:06.895 pt2 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.895 [2024-11-26 19:09:33.416556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:06.895 [2024-11-26 19:09:33.419153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:06.895 [2024-11-26 19:09:33.419573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:06.895 [2024-11-26 19:09:33.419601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:06.895 [2024-11-26 19:09:33.419720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:06.895 [2024-11-26 19:09:33.419842] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:06.895 [2024-11-26 19:09:33.419862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:06.895 [2024-11-26 19:09:33.419963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.895 
19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.895 "name": "raid_bdev1", 00:20:06.895 "uuid": "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632", 00:20:06.895 "strip_size_kb": 0, 00:20:06.895 "state": "online", 00:20:06.895 "raid_level": "raid1", 00:20:06.895 "superblock": true, 00:20:06.895 "num_base_bdevs": 2, 00:20:06.895 "num_base_bdevs_discovered": 2, 00:20:06.895 "num_base_bdevs_operational": 2, 00:20:06.895 "base_bdevs_list": [ 00:20:06.895 { 00:20:06.895 "name": "pt1", 00:20:06.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:06.895 "is_configured": true, 00:20:06.895 "data_offset": 256, 00:20:06.895 "data_size": 7936 00:20:06.895 }, 00:20:06.895 { 00:20:06.895 "name": "pt2", 00:20:06.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:06.895 "is_configured": true, 00:20:06.895 "data_offset": 256, 00:20:06.895 "data_size": 7936 00:20:06.895 } 00:20:06.895 ] 00:20:06.895 }' 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.895 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:07.463 19:09:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.463 [2024-11-26 19:09:33.913087] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:07.463 "name": "raid_bdev1", 00:20:07.463 "aliases": [ 00:20:07.463 "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632" 00:20:07.463 ], 00:20:07.463 "product_name": "Raid Volume", 00:20:07.463 "block_size": 4128, 00:20:07.463 "num_blocks": 7936, 00:20:07.463 "uuid": "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632", 00:20:07.463 "md_size": 32, 00:20:07.463 "md_interleave": true, 00:20:07.463 "dif_type": 0, 00:20:07.463 "assigned_rate_limits": { 00:20:07.463 "rw_ios_per_sec": 0, 00:20:07.463 "rw_mbytes_per_sec": 0, 00:20:07.463 "r_mbytes_per_sec": 0, 00:20:07.463 "w_mbytes_per_sec": 0 00:20:07.463 }, 00:20:07.463 "claimed": false, 00:20:07.463 "zoned": false, 00:20:07.463 "supported_io_types": { 00:20:07.463 "read": true, 00:20:07.463 "write": true, 00:20:07.463 "unmap": false, 00:20:07.463 "flush": false, 00:20:07.463 "reset": true, 
00:20:07.463 "nvme_admin": false, 00:20:07.463 "nvme_io": false, 00:20:07.463 "nvme_io_md": false, 00:20:07.463 "write_zeroes": true, 00:20:07.463 "zcopy": false, 00:20:07.463 "get_zone_info": false, 00:20:07.463 "zone_management": false, 00:20:07.463 "zone_append": false, 00:20:07.463 "compare": false, 00:20:07.463 "compare_and_write": false, 00:20:07.463 "abort": false, 00:20:07.463 "seek_hole": false, 00:20:07.463 "seek_data": false, 00:20:07.463 "copy": false, 00:20:07.463 "nvme_iov_md": false 00:20:07.463 }, 00:20:07.463 "memory_domains": [ 00:20:07.463 { 00:20:07.463 "dma_device_id": "system", 00:20:07.463 "dma_device_type": 1 00:20:07.463 }, 00:20:07.463 { 00:20:07.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.463 "dma_device_type": 2 00:20:07.463 }, 00:20:07.463 { 00:20:07.463 "dma_device_id": "system", 00:20:07.463 "dma_device_type": 1 00:20:07.463 }, 00:20:07.463 { 00:20:07.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.463 "dma_device_type": 2 00:20:07.463 } 00:20:07.463 ], 00:20:07.463 "driver_specific": { 00:20:07.463 "raid": { 00:20:07.463 "uuid": "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632", 00:20:07.463 "strip_size_kb": 0, 00:20:07.463 "state": "online", 00:20:07.463 "raid_level": "raid1", 00:20:07.463 "superblock": true, 00:20:07.463 "num_base_bdevs": 2, 00:20:07.463 "num_base_bdevs_discovered": 2, 00:20:07.463 "num_base_bdevs_operational": 2, 00:20:07.463 "base_bdevs_list": [ 00:20:07.463 { 00:20:07.463 "name": "pt1", 00:20:07.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:07.463 "is_configured": true, 00:20:07.463 "data_offset": 256, 00:20:07.463 "data_size": 7936 00:20:07.463 }, 00:20:07.463 { 00:20:07.463 "name": "pt2", 00:20:07.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.463 "is_configured": true, 00:20:07.463 "data_offset": 256, 00:20:07.463 "data_size": 7936 00:20:07.463 } 00:20:07.463 ] 00:20:07.463 } 00:20:07.463 } 00:20:07.463 }' 00:20:07.463 19:09:33 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:07.463 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:07.463 pt2' 00:20:07.463 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.463 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:07.463 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:07.463 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:07.463 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.463 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.463 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.723 [2024-11-26 19:09:34.177133] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cd049a5d-d5ef-4ec0-bc1c-1116a0d90632 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z cd049a5d-d5ef-4ec0-bc1c-1116a0d90632 ']' 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.723 19:09:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.723 [2024-11-26 19:09:34.220750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:07.723 [2024-11-26 19:09:34.220786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:07.723 [2024-11-26 19:09:34.220935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:07.723 [2024-11-26 19:09:34.221023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:07.723 [2024-11-26 19:09:34.221044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:07.723 19:09:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.723 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.981 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:07.982 
19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.982 [2024-11-26 19:09:34.368839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:07.982 [2024-11-26 19:09:34.371633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:07.982 [2024-11-26 19:09:34.371745] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:07.982 [2024-11-26 19:09:34.371833] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:07.982 [2024-11-26 19:09:34.371860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:07.982 [2024-11-26 19:09:34.371876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:07.982 request: 
00:20:07.982 { 00:20:07.982 "name": "raid_bdev1", 00:20:07.982 "raid_level": "raid1", 00:20:07.982 "base_bdevs": [ 00:20:07.982 "malloc1", 00:20:07.982 "malloc2" 00:20:07.982 ], 00:20:07.982 "superblock": false, 00:20:07.982 "method": "bdev_raid_create", 00:20:07.982 "req_id": 1 00:20:07.982 } 00:20:07.982 Got JSON-RPC error response 00:20:07.982 response: 00:20:07.982 { 00:20:07.982 "code": -17, 00:20:07.982 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:07.982 } 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.982 [2024-11-26 19:09:34.428775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:07.982 [2024-11-26 19:09:34.429018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.982 [2024-11-26 19:09:34.429182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:07.982 [2024-11-26 19:09:34.429315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.982 [2024-11-26 19:09:34.432139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.982 [2024-11-26 19:09:34.432313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:07.982 [2024-11-26 19:09:34.432519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:07.982 [2024-11-26 19:09:34.432709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:07.982 pt1 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.982 19:09:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.982 "name": "raid_bdev1", 00:20:07.982 "uuid": "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632", 00:20:07.982 "strip_size_kb": 0, 00:20:07.982 "state": "configuring", 00:20:07.982 "raid_level": "raid1", 00:20:07.982 "superblock": true, 00:20:07.982 "num_base_bdevs": 2, 00:20:07.982 "num_base_bdevs_discovered": 1, 00:20:07.982 "num_base_bdevs_operational": 2, 00:20:07.982 "base_bdevs_list": [ 00:20:07.982 { 00:20:07.982 "name": "pt1", 00:20:07.982 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:07.982 "is_configured": true, 00:20:07.982 
"data_offset": 256, 00:20:07.982 "data_size": 7936 00:20:07.982 }, 00:20:07.982 { 00:20:07.982 "name": null, 00:20:07.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.982 "is_configured": false, 00:20:07.982 "data_offset": 256, 00:20:07.982 "data_size": 7936 00:20:07.982 } 00:20:07.982 ] 00:20:07.982 }' 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.982 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.567 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:08.567 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:08.567 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:08.567 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:08.567 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.567 19:09:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.567 [2024-11-26 19:09:35.001212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:08.567 [2024-11-26 19:09:35.001331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.567 [2024-11-26 19:09:35.001369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:08.567 [2024-11-26 19:09:35.001388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.567 [2024-11-26 19:09:35.001651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.567 [2024-11-26 19:09:35.001689] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:20:08.567 [2024-11-26 19:09:35.001769] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:08.567 [2024-11-26 19:09:35.001809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:08.567 [2024-11-26 19:09:35.001951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:08.567 [2024-11-26 19:09:35.002160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:08.567 [2024-11-26 19:09:35.002277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:08.567 [2024-11-26 19:09:35.002400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:08.567 [2024-11-26 19:09:35.002415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:08.567 [2024-11-26 19:09:35.002513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.567 pt2 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.567 19:09:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.567 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.567 "name": "raid_bdev1", 00:20:08.567 "uuid": "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632", 00:20:08.567 "strip_size_kb": 0, 00:20:08.567 "state": "online", 00:20:08.567 "raid_level": "raid1", 00:20:08.567 "superblock": true, 00:20:08.567 "num_base_bdevs": 2, 00:20:08.567 "num_base_bdevs_discovered": 2, 00:20:08.567 "num_base_bdevs_operational": 2, 00:20:08.567 "base_bdevs_list": [ 00:20:08.567 { 00:20:08.567 "name": "pt1", 00:20:08.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:08.567 "is_configured": true, 00:20:08.567 
"data_offset": 256, 00:20:08.567 "data_size": 7936 00:20:08.567 }, 00:20:08.567 { 00:20:08.567 "name": "pt2", 00:20:08.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.568 "is_configured": true, 00:20:08.568 "data_offset": 256, 00:20:08.568 "data_size": 7936 00:20:08.568 } 00:20:08.568 ] 00:20:08.568 }' 00:20:08.568 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.568 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.157 [2024-11-26 19:09:35.529678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:09.157 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:09.157 "name": "raid_bdev1", 00:20:09.157 "aliases": [ 00:20:09.157 "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632" 00:20:09.157 ], 00:20:09.157 "product_name": "Raid Volume", 00:20:09.157 "block_size": 4128, 00:20:09.157 "num_blocks": 7936, 00:20:09.157 "uuid": "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632", 00:20:09.157 "md_size": 32, 00:20:09.157 "md_interleave": true, 00:20:09.157 "dif_type": 0, 00:20:09.157 "assigned_rate_limits": { 00:20:09.157 "rw_ios_per_sec": 0, 00:20:09.157 "rw_mbytes_per_sec": 0, 00:20:09.157 "r_mbytes_per_sec": 0, 00:20:09.157 "w_mbytes_per_sec": 0 00:20:09.157 }, 00:20:09.157 "claimed": false, 00:20:09.157 "zoned": false, 00:20:09.157 "supported_io_types": { 00:20:09.157 "read": true, 00:20:09.157 "write": true, 00:20:09.157 "unmap": false, 00:20:09.157 "flush": false, 00:20:09.157 "reset": true, 00:20:09.157 "nvme_admin": false, 00:20:09.157 "nvme_io": false, 00:20:09.157 "nvme_io_md": false, 00:20:09.157 "write_zeroes": true, 00:20:09.157 "zcopy": false, 00:20:09.157 "get_zone_info": false, 00:20:09.157 "zone_management": false, 00:20:09.157 "zone_append": false, 00:20:09.157 "compare": false, 00:20:09.157 "compare_and_write": false, 00:20:09.157 "abort": false, 00:20:09.157 "seek_hole": false, 00:20:09.157 "seek_data": false, 00:20:09.157 "copy": false, 00:20:09.157 "nvme_iov_md": false 00:20:09.157 }, 00:20:09.157 "memory_domains": [ 00:20:09.157 { 00:20:09.157 "dma_device_id": "system", 00:20:09.157 "dma_device_type": 1 00:20:09.157 }, 00:20:09.157 { 00:20:09.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.157 "dma_device_type": 2 00:20:09.157 }, 00:20:09.157 { 00:20:09.157 "dma_device_id": "system", 00:20:09.157 "dma_device_type": 1 00:20:09.157 }, 00:20:09.157 { 00:20:09.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.157 "dma_device_type": 2 00:20:09.157 } 00:20:09.157 ], 00:20:09.157 "driver_specific": { 
00:20:09.157 "raid": { 00:20:09.157 "uuid": "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632", 00:20:09.157 "strip_size_kb": 0, 00:20:09.157 "state": "online", 00:20:09.157 "raid_level": "raid1", 00:20:09.157 "superblock": true, 00:20:09.157 "num_base_bdevs": 2, 00:20:09.157 "num_base_bdevs_discovered": 2, 00:20:09.157 "num_base_bdevs_operational": 2, 00:20:09.157 "base_bdevs_list": [ 00:20:09.157 { 00:20:09.157 "name": "pt1", 00:20:09.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:09.157 "is_configured": true, 00:20:09.157 "data_offset": 256, 00:20:09.157 "data_size": 7936 00:20:09.157 }, 00:20:09.157 { 00:20:09.157 "name": "pt2", 00:20:09.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.158 "is_configured": true, 00:20:09.158 "data_offset": 256, 00:20:09.158 "data_size": 7936 00:20:09.158 } 00:20:09.158 ] 00:20:09.158 } 00:20:09.158 } 00:20:09.158 }' 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:09.158 pt2' 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.158 
19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:09.158 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.417 [2024-11-26 19:09:35.805774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' cd049a5d-d5ef-4ec0-bc1c-1116a0d90632 '!=' cd049a5d-d5ef-4ec0-bc1c-1116a0d90632 ']' 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.417 [2024-11-26 19:09:35.853524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.417 
19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.417 "name": "raid_bdev1", 00:20:09.417 "uuid": "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632", 00:20:09.417 "strip_size_kb": 0, 00:20:09.417 "state": "online", 00:20:09.417 "raid_level": "raid1", 00:20:09.417 "superblock": true, 00:20:09.417 "num_base_bdevs": 2, 00:20:09.417 "num_base_bdevs_discovered": 1, 00:20:09.417 "num_base_bdevs_operational": 1, 00:20:09.417 "base_bdevs_list": [ 00:20:09.417 { 00:20:09.417 "name": null, 00:20:09.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.417 "is_configured": false, 00:20:09.417 
"data_offset": 0, 00:20:09.417 "data_size": 7936 00:20:09.417 }, 00:20:09.417 { 00:20:09.417 "name": "pt2", 00:20:09.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.417 "is_configured": true, 00:20:09.417 "data_offset": 256, 00:20:09.417 "data_size": 7936 00:20:09.417 } 00:20:09.417 ] 00:20:09.417 }' 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.417 19:09:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.985 [2024-11-26 19:09:36.385594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.985 [2024-11-26 19:09:36.385636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:09.985 [2024-11-26 19:09:36.385745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.985 [2024-11-26 19:09:36.385845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.985 [2024-11-26 19:09:36.385867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.985 19:09:36 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.985 [2024-11-26 19:09:36.465604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:09.985 [2024-11-26 19:09:36.465675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.985 [2024-11-26 19:09:36.465701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:09.985 [2024-11-26 19:09:36.465719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.985 [2024-11-26 19:09:36.468552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.985 [2024-11-26 19:09:36.468609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:09.985 [2024-11-26 19:09:36.468688] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:09.985 [2024-11-26 19:09:36.468760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.985 [2024-11-26 19:09:36.468873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:09.985 [2024-11-26 19:09:36.468897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:09.985 [2024-11-26 19:09:36.469011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:09.985 [2024-11-26 19:09:36.469106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:09.985 [2024-11-26 19:09:36.469121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:09.985 [2024-11-26 19:09:36.469218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:09.985 pt2 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.985 "name": "raid_bdev1", 00:20:09.985 "uuid": "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632", 00:20:09.985 "strip_size_kb": 0, 00:20:09.985 "state": "online", 00:20:09.985 "raid_level": "raid1", 00:20:09.985 "superblock": true, 00:20:09.985 "num_base_bdevs": 2, 00:20:09.985 "num_base_bdevs_discovered": 1, 00:20:09.985 "num_base_bdevs_operational": 1, 00:20:09.985 "base_bdevs_list": [ 00:20:09.985 { 00:20:09.985 "name": null, 00:20:09.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.985 "is_configured": false, 00:20:09.985 "data_offset": 256, 00:20:09.985 "data_size": 7936 00:20:09.985 }, 00:20:09.985 { 00:20:09.985 "name": "pt2", 00:20:09.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.985 "is_configured": true, 00:20:09.985 "data_offset": 256, 00:20:09.985 "data_size": 7936 00:20:09.985 } 00:20:09.985 ] 00:20:09.985 }' 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.985 19:09:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.553 [2024-11-26 19:09:37.033801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:10.553 [2024-11-26 19:09:37.033858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:10.553 [2024-11-26 19:09:37.033975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:10.553 
[2024-11-26 19:09:37.034053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:10.553 [2024-11-26 19:09:37.034069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.553 [2024-11-26 19:09:37.097870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:10.553 [2024-11-26 19:09:37.097971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:20:10.553 [2024-11-26 19:09:37.098007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:10.553 [2024-11-26 19:09:37.098023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.553 [2024-11-26 19:09:37.101054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.553 [2024-11-26 19:09:37.101100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:10.553 [2024-11-26 19:09:37.101208] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:10.553 [2024-11-26 19:09:37.101279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:10.553 [2024-11-26 19:09:37.101443] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:10.553 [2024-11-26 19:09:37.101462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:10.553 [2024-11-26 19:09:37.101490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:10.553 [2024-11-26 19:09:37.101564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:10.553 [2024-11-26 19:09:37.101689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:10.553 [2024-11-26 19:09:37.101705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:10.553 [2024-11-26 19:09:37.101815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:10.553 [2024-11-26 19:09:37.101902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:10.553 [2024-11-26 19:09:37.101921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:10.553 [2024-11-26 
19:09:37.102086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.553 pt1 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.553 
19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.553 "name": "raid_bdev1", 00:20:10.553 "uuid": "cd049a5d-d5ef-4ec0-bc1c-1116a0d90632", 00:20:10.553 "strip_size_kb": 0, 00:20:10.553 "state": "online", 00:20:10.553 "raid_level": "raid1", 00:20:10.553 "superblock": true, 00:20:10.553 "num_base_bdevs": 2, 00:20:10.553 "num_base_bdevs_discovered": 1, 00:20:10.553 "num_base_bdevs_operational": 1, 00:20:10.553 "base_bdevs_list": [ 00:20:10.553 { 00:20:10.553 "name": null, 00:20:10.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.553 "is_configured": false, 00:20:10.553 "data_offset": 256, 00:20:10.553 "data_size": 7936 00:20:10.553 }, 00:20:10.553 { 00:20:10.553 "name": "pt2", 00:20:10.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.553 "is_configured": true, 00:20:10.553 "data_offset": 256, 00:20:10.553 "data_size": 7936 00:20:10.553 } 00:20:10.553 ] 00:20:10.553 }' 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.553 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.121 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:11.121 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:11.121 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.121 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.121 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.121 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:11.121 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.122 [2024-11-26 19:09:37.658655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' cd049a5d-d5ef-4ec0-bc1c-1116a0d90632 '!=' cd049a5d-d5ef-4ec0-bc1c-1116a0d90632 ']' 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89703 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89703 ']' 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89703 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89703 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.122 19:09:37 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.122 killing process with pid 89703 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89703' 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89703 00:20:11.122 [2024-11-26 19:09:37.732306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:11.122 [2024-11-26 19:09:37.732460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.122 [2024-11-26 19:09:37.732540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.122 [2024-11-26 19:09:37.732565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:11.122 19:09:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89703 00:20:11.380 [2024-11-26 19:09:37.938556] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:12.758 19:09:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:12.758 00:20:12.758 real 0m6.991s 00:20:12.758 user 0m10.943s 00:20:12.758 sys 0m1.110s 00:20:12.758 19:09:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.758 ************************************ 00:20:12.758 END TEST raid_superblock_test_md_interleaved 00:20:12.758 ************************************ 00:20:12.758 19:09:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.758 19:09:39 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:12.758 19:09:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:20:12.758 19:09:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.758 19:09:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:12.758 ************************************ 00:20:12.758 START TEST raid_rebuild_test_sb_md_interleaved 00:20:12.758 ************************************ 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:12.758 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=90036 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 90036 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90036 ']' 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M 
-q 2 -U -z -L bdev_raid 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.759 19:09:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.759 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:12.759 Zero copy mechanism will not be used. 00:20:12.759 [2024-11-26 19:09:39.273514] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:20:12.759 [2024-11-26 19:09:39.273689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90036 ] 00:20:13.017 [2024-11-26 19:09:39.466791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.281 [2024-11-26 19:09:39.683144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.550 [2024-11-26 19:09:39.913574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:13.550 [2024-11-26 19:09:39.913642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.809 BaseBdev1_malloc 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.809 [2024-11-26 19:09:40.297667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:13.809 [2024-11-26 19:09:40.297746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.809 [2024-11-26 19:09:40.297777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:13.809 [2024-11-26 19:09:40.297811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.809 [2024-11-26 19:09:40.300386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.809 [2024-11-26 19:09:40.300457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:13.809 BaseBdev1 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.809 BaseBdev2_malloc 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.809 [2024-11-26 19:09:40.354894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:13.809 [2024-11-26 19:09:40.355017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.809 [2024-11-26 19:09:40.355054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:13.809 [2024-11-26 19:09:40.355094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.809 [2024-11-26 19:09:40.357725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.809 [2024-11-26 19:09:40.357782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:13.809 BaseBdev2 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 
00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.809 spare_malloc 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.809 spare_delay 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.809 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.809 [2024-11-26 19:09:40.427575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:13.809 [2024-11-26 19:09:40.427663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.809 [2024-11-26 19:09:40.427695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:13.809 [2024-11-26 19:09:40.427713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.068 [2024-11-26 19:09:40.430390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.068 [2024-11-26 19:09:40.430437] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:14.068 spare 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.068 [2024-11-26 19:09:40.435595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:14.068 [2024-11-26 19:09:40.438109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:14.068 [2024-11-26 19:09:40.438404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:14.068 [2024-11-26 19:09:40.438429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:14.068 [2024-11-26 19:09:40.438529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:14.068 [2024-11-26 19:09:40.438633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:14.068 [2024-11-26 19:09:40.438648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:14.068 [2024-11-26 19:09:40.438744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.068 "name": "raid_bdev1", 00:20:14.068 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:14.068 "strip_size_kb": 0, 00:20:14.068 "state": "online", 00:20:14.068 "raid_level": "raid1", 00:20:14.068 "superblock": 
true, 00:20:14.068 "num_base_bdevs": 2, 00:20:14.068 "num_base_bdevs_discovered": 2, 00:20:14.068 "num_base_bdevs_operational": 2, 00:20:14.068 "base_bdevs_list": [ 00:20:14.068 { 00:20:14.068 "name": "BaseBdev1", 00:20:14.068 "uuid": "8768fb38-0bda-518a-b488-77340fabd11d", 00:20:14.068 "is_configured": true, 00:20:14.068 "data_offset": 256, 00:20:14.068 "data_size": 7936 00:20:14.068 }, 00:20:14.068 { 00:20:14.068 "name": "BaseBdev2", 00:20:14.068 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:14.068 "is_configured": true, 00:20:14.068 "data_offset": 256, 00:20:14.068 "data_size": 7936 00:20:14.068 } 00:20:14.068 ] 00:20:14.068 }' 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.068 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.327 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:14.327 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:14.327 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.327 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.327 [2024-11-26 19:09:40.928335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.585 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.585 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:14.585 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.585 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.585 19:09:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.585 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:14.585 19:09:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.585 [2024-11-26 19:09:41.023831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.585 19:09:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.585 "name": "raid_bdev1", 00:20:14.585 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:14.585 "strip_size_kb": 0, 00:20:14.585 "state": "online", 00:20:14.585 "raid_level": "raid1", 00:20:14.585 "superblock": true, 00:20:14.585 "num_base_bdevs": 2, 00:20:14.585 "num_base_bdevs_discovered": 1, 00:20:14.585 "num_base_bdevs_operational": 1, 00:20:14.585 "base_bdevs_list": [ 00:20:14.585 { 00:20:14.585 "name": null, 00:20:14.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.585 "is_configured": false, 00:20:14.585 "data_offset": 0, 00:20:14.585 "data_size": 7936 00:20:14.585 }, 00:20:14.585 { 00:20:14.585 "name": "BaseBdev2", 00:20:14.585 
"uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:14.585 "is_configured": true, 00:20:14.585 "data_offset": 256, 00:20:14.585 "data_size": 7936 00:20:14.585 } 00:20:14.585 ] 00:20:14.585 }' 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.585 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.152 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:15.152 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.152 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.152 [2024-11-26 19:09:41.471942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.152 [2024-11-26 19:09:41.489980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:15.152 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.152 19:09:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:15.152 [2024-11-26 19:09:41.492752] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.090 "name": "raid_bdev1", 00:20:16.090 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:16.090 "strip_size_kb": 0, 00:20:16.090 "state": "online", 00:20:16.090 "raid_level": "raid1", 00:20:16.090 "superblock": true, 00:20:16.090 "num_base_bdevs": 2, 00:20:16.090 "num_base_bdevs_discovered": 2, 00:20:16.090 "num_base_bdevs_operational": 2, 00:20:16.090 "process": { 00:20:16.090 "type": "rebuild", 00:20:16.090 "target": "spare", 00:20:16.090 "progress": { 00:20:16.090 "blocks": 2560, 00:20:16.090 "percent": 32 00:20:16.090 } 00:20:16.090 }, 00:20:16.090 "base_bdevs_list": [ 00:20:16.090 { 00:20:16.090 "name": "spare", 00:20:16.090 "uuid": "9d18cc88-b39a-5758-a377-807f7bfab33a", 00:20:16.090 "is_configured": true, 00:20:16.090 "data_offset": 256, 00:20:16.090 "data_size": 7936 00:20:16.090 }, 00:20:16.090 { 00:20:16.090 "name": "BaseBdev2", 00:20:16.090 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:16.090 "is_configured": true, 00:20:16.090 "data_offset": 256, 00:20:16.090 "data_size": 7936 00:20:16.090 } 00:20:16.090 ] 00:20:16.090 }' 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.090 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.090 [2024-11-26 19:09:42.670855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:16.090 [2024-11-26 19:09:42.704685] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:16.090 [2024-11-26 19:09:42.704787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.090 [2024-11-26 19:09:42.704824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:16.090 [2024-11-26 19:09:42.704845] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.349 "name": "raid_bdev1", 00:20:16.349 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:16.349 "strip_size_kb": 0, 00:20:16.349 "state": "online", 00:20:16.349 "raid_level": "raid1", 00:20:16.349 "superblock": true, 00:20:16.349 "num_base_bdevs": 2, 00:20:16.349 "num_base_bdevs_discovered": 1, 00:20:16.349 "num_base_bdevs_operational": 1, 00:20:16.349 "base_bdevs_list": [ 00:20:16.349 { 00:20:16.349 "name": null, 00:20:16.349 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:16.349 "is_configured": false, 00:20:16.349 "data_offset": 0, 00:20:16.349 "data_size": 7936 00:20:16.349 }, 00:20:16.349 { 00:20:16.349 "name": "BaseBdev2", 00:20:16.349 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:16.349 "is_configured": true, 00:20:16.349 "data_offset": 256, 00:20:16.349 "data_size": 7936 00:20:16.349 } 00:20:16.349 ] 00:20:16.349 }' 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.349 19:09:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:16.916 "name": "raid_bdev1", 00:20:16.916 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:16.916 "strip_size_kb": 0, 00:20:16.916 "state": "online", 00:20:16.916 "raid_level": "raid1", 00:20:16.916 "superblock": true, 00:20:16.916 "num_base_bdevs": 2, 00:20:16.916 "num_base_bdevs_discovered": 1, 00:20:16.916 "num_base_bdevs_operational": 1, 00:20:16.916 "base_bdevs_list": [ 00:20:16.916 { 00:20:16.916 "name": null, 00:20:16.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.916 "is_configured": false, 00:20:16.916 "data_offset": 0, 00:20:16.916 "data_size": 7936 00:20:16.916 }, 00:20:16.916 { 00:20:16.916 "name": "BaseBdev2", 00:20:16.916 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:16.916 "is_configured": true, 00:20:16.916 "data_offset": 256, 00:20:16.916 "data_size": 7936 00:20:16.916 } 00:20:16.916 ] 00:20:16.916 }' 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.916 [2024-11-26 19:09:43.407533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:16.916 [2024-11-26 19:09:43.424594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005fb0 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.916 19:09:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:16.916 [2024-11-26 19:09:43.427372] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:17.852 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.852 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.852 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.853 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.853 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.853 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.853 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.853 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.853 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.853 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.112 "name": "raid_bdev1", 00:20:18.112 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:18.112 "strip_size_kb": 0, 00:20:18.112 "state": "online", 00:20:18.112 "raid_level": "raid1", 00:20:18.112 "superblock": true, 
00:20:18.112 "num_base_bdevs": 2, 00:20:18.112 "num_base_bdevs_discovered": 2, 00:20:18.112 "num_base_bdevs_operational": 2, 00:20:18.112 "process": { 00:20:18.112 "type": "rebuild", 00:20:18.112 "target": "spare", 00:20:18.112 "progress": { 00:20:18.112 "blocks": 2560, 00:20:18.112 "percent": 32 00:20:18.112 } 00:20:18.112 }, 00:20:18.112 "base_bdevs_list": [ 00:20:18.112 { 00:20:18.112 "name": "spare", 00:20:18.112 "uuid": "9d18cc88-b39a-5758-a377-807f7bfab33a", 00:20:18.112 "is_configured": true, 00:20:18.112 "data_offset": 256, 00:20:18.112 "data_size": 7936 00:20:18.112 }, 00:20:18.112 { 00:20:18.112 "name": "BaseBdev2", 00:20:18.112 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:18.112 "is_configured": true, 00:20:18.112 "data_offset": 256, 00:20:18.112 "data_size": 7936 00:20:18.112 } 00:20:18.112 ] 00:20:18.112 }' 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:18.112 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:18.112 19:09:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=822 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.112 "name": "raid_bdev1", 00:20:18.112 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:18.112 "strip_size_kb": 0, 00:20:18.112 "state": "online", 00:20:18.112 "raid_level": "raid1", 00:20:18.112 "superblock": true, 00:20:18.112 "num_base_bdevs": 2, 00:20:18.112 
"num_base_bdevs_discovered": 2, 00:20:18.112 "num_base_bdevs_operational": 2, 00:20:18.112 "process": { 00:20:18.112 "type": "rebuild", 00:20:18.112 "target": "spare", 00:20:18.112 "progress": { 00:20:18.112 "blocks": 2816, 00:20:18.112 "percent": 35 00:20:18.112 } 00:20:18.112 }, 00:20:18.112 "base_bdevs_list": [ 00:20:18.112 { 00:20:18.112 "name": "spare", 00:20:18.112 "uuid": "9d18cc88-b39a-5758-a377-807f7bfab33a", 00:20:18.112 "is_configured": true, 00:20:18.112 "data_offset": 256, 00:20:18.112 "data_size": 7936 00:20:18.112 }, 00:20:18.112 { 00:20:18.112 "name": "BaseBdev2", 00:20:18.112 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:18.112 "is_configured": true, 00:20:18.112 "data_offset": 256, 00:20:18.112 "data_size": 7936 00:20:18.112 } 00:20:18.112 ] 00:20:18.112 }' 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.112 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.371 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.371 19:09:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.308 19:09:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.308 "name": "raid_bdev1", 00:20:19.308 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:19.308 "strip_size_kb": 0, 00:20:19.308 "state": "online", 00:20:19.308 "raid_level": "raid1", 00:20:19.308 "superblock": true, 00:20:19.308 "num_base_bdevs": 2, 00:20:19.308 "num_base_bdevs_discovered": 2, 00:20:19.308 "num_base_bdevs_operational": 2, 00:20:19.308 "process": { 00:20:19.308 "type": "rebuild", 00:20:19.308 "target": "spare", 00:20:19.308 "progress": { 00:20:19.308 "blocks": 5888, 00:20:19.308 "percent": 74 00:20:19.308 } 00:20:19.308 }, 00:20:19.308 "base_bdevs_list": [ 00:20:19.308 { 00:20:19.308 "name": "spare", 00:20:19.308 "uuid": "9d18cc88-b39a-5758-a377-807f7bfab33a", 00:20:19.308 "is_configured": true, 00:20:19.308 "data_offset": 256, 00:20:19.308 "data_size": 7936 00:20:19.308 }, 00:20:19.308 { 00:20:19.308 "name": "BaseBdev2", 00:20:19.308 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:19.308 "is_configured": true, 00:20:19.308 "data_offset": 256, 00:20:19.308 "data_size": 7936 00:20:19.308 } 
00:20:19.308 ] 00:20:19.308 }' 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.308 19:09:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:20.244 [2024-11-26 19:09:46.557774] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:20.244 [2024-11-26 19:09:46.557907] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:20.244 [2024-11-26 19:09:46.558105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.503 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:20.503 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.503 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.503 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.503 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.503 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.503 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.503 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.504 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.504 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.504 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.504 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.504 "name": "raid_bdev1", 00:20:20.504 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:20.504 "strip_size_kb": 0, 00:20:20.504 "state": "online", 00:20:20.504 "raid_level": "raid1", 00:20:20.504 "superblock": true, 00:20:20.504 "num_base_bdevs": 2, 00:20:20.504 "num_base_bdevs_discovered": 2, 00:20:20.504 "num_base_bdevs_operational": 2, 00:20:20.504 "base_bdevs_list": [ 00:20:20.504 { 00:20:20.504 "name": "spare", 00:20:20.504 "uuid": "9d18cc88-b39a-5758-a377-807f7bfab33a", 00:20:20.504 "is_configured": true, 00:20:20.504 "data_offset": 256, 00:20:20.504 "data_size": 7936 00:20:20.504 }, 00:20:20.504 { 00:20:20.504 "name": "BaseBdev2", 00:20:20.504 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:20.504 "is_configured": true, 00:20:20.504 "data_offset": 256, 00:20:20.504 "data_size": 7936 00:20:20.504 } 00:20:20.504 ] 00:20:20.504 }' 00:20:20.504 19:09:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@709 -- # break 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.504 "name": "raid_bdev1", 00:20:20.504 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:20.504 "strip_size_kb": 0, 00:20:20.504 "state": "online", 00:20:20.504 "raid_level": "raid1", 00:20:20.504 "superblock": true, 00:20:20.504 "num_base_bdevs": 2, 00:20:20.504 "num_base_bdevs_discovered": 2, 00:20:20.504 "num_base_bdevs_operational": 2, 00:20:20.504 "base_bdevs_list": [ 00:20:20.504 { 00:20:20.504 "name": "spare", 00:20:20.504 "uuid": "9d18cc88-b39a-5758-a377-807f7bfab33a", 00:20:20.504 "is_configured": true, 00:20:20.504 "data_offset": 256, 00:20:20.504 "data_size": 7936 
00:20:20.504 }, 00:20:20.504 { 00:20:20.504 "name": "BaseBdev2", 00:20:20.504 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:20.504 "is_configured": true, 00:20:20.504 "data_offset": 256, 00:20:20.504 "data_size": 7936 00:20:20.504 } 00:20:20.504 ] 00:20:20.504 }' 00:20:20.504 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.763 "name": "raid_bdev1", 00:20:20.763 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:20.763 "strip_size_kb": 0, 00:20:20.763 "state": "online", 00:20:20.763 "raid_level": "raid1", 00:20:20.763 "superblock": true, 00:20:20.763 "num_base_bdevs": 2, 00:20:20.763 "num_base_bdevs_discovered": 2, 00:20:20.763 "num_base_bdevs_operational": 2, 00:20:20.763 "base_bdevs_list": [ 00:20:20.763 { 00:20:20.763 "name": "spare", 00:20:20.763 "uuid": "9d18cc88-b39a-5758-a377-807f7bfab33a", 00:20:20.763 "is_configured": true, 00:20:20.763 "data_offset": 256, 00:20:20.763 "data_size": 7936 00:20:20.763 }, 00:20:20.763 { 00:20:20.763 "name": "BaseBdev2", 00:20:20.763 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:20.763 "is_configured": true, 00:20:20.763 "data_offset": 256, 00:20:20.763 "data_size": 7936 00:20:20.763 } 00:20:20.763 ] 00:20:20.763 }' 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.763 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.331 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.332 [2024-11-26 19:09:47.702645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:21.332 [2024-11-26 19:09:47.702866] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.332 [2024-11-26 19:09:47.703138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.332 [2024-11-26 19:09:47.703379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.332 [2024-11-26 19:09:47.703412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:21.332 19:09:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.332 [2024-11-26 19:09:47.774603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:21.332 [2024-11-26 19:09:47.774679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.332 [2024-11-26 19:09:47.774714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:21.332 [2024-11-26 19:09:47.774730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.332 [2024-11-26 19:09:47.777552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.332 [2024-11-26 19:09:47.777762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:21.332 [2024-11-26 19:09:47.777868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:21.332 [2024-11-26 19:09:47.777942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:21.332 [2024-11-26 19:09:47.778104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:21.332 spare 00:20:21.332 19:09:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.332 [2024-11-26 19:09:47.878249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:21.332 [2024-11-26 19:09:47.878352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:21.332 [2024-11-26 19:09:47.878562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:21.332 [2024-11-26 19:09:47.878751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:21.332 [2024-11-26 19:09:47.878771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:21.332 [2024-11-26 19:09:47.878960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.332 "name": "raid_bdev1", 00:20:21.332 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:21.332 "strip_size_kb": 0, 00:20:21.332 "state": "online", 00:20:21.332 "raid_level": "raid1", 00:20:21.332 "superblock": true, 00:20:21.332 "num_base_bdevs": 2, 00:20:21.332 "num_base_bdevs_discovered": 2, 00:20:21.332 "num_base_bdevs_operational": 2, 00:20:21.332 "base_bdevs_list": [ 00:20:21.332 { 00:20:21.332 "name": "spare", 00:20:21.332 "uuid": "9d18cc88-b39a-5758-a377-807f7bfab33a", 00:20:21.332 "is_configured": true, 00:20:21.332 "data_offset": 256, 00:20:21.332 "data_size": 7936 00:20:21.332 }, 00:20:21.332 { 00:20:21.332 "name": 
"BaseBdev2", 00:20:21.332 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:21.332 "is_configured": true, 00:20:21.332 "data_offset": 256, 00:20:21.332 "data_size": 7936 00:20:21.332 } 00:20:21.332 ] 00:20:21.332 }' 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.332 19:09:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.899 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.899 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.899 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.899 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.899 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.899 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.899 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.899 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.899 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.899 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.899 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.899 "name": "raid_bdev1", 00:20:21.899 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:21.899 "strip_size_kb": 0, 00:20:21.899 "state": "online", 00:20:21.899 
"raid_level": "raid1", 00:20:21.899 "superblock": true, 00:20:21.899 "num_base_bdevs": 2, 00:20:21.899 "num_base_bdevs_discovered": 2, 00:20:21.899 "num_base_bdevs_operational": 2, 00:20:21.899 "base_bdevs_list": [ 00:20:21.899 { 00:20:21.899 "name": "spare", 00:20:21.900 "uuid": "9d18cc88-b39a-5758-a377-807f7bfab33a", 00:20:21.900 "is_configured": true, 00:20:21.900 "data_offset": 256, 00:20:21.900 "data_size": 7936 00:20:21.900 }, 00:20:21.900 { 00:20:21.900 "name": "BaseBdev2", 00:20:21.900 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:21.900 "is_configured": true, 00:20:21.900 "data_offset": 256, 00:20:21.900 "data_size": 7936 00:20:21.900 } 00:20:21.900 ] 00:20:21.900 }' 00:20:21.900 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.159 19:09:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.159 [2024-11-26 19:09:48.647259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.159 19:09:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.159 "name": "raid_bdev1", 00:20:22.159 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:22.159 "strip_size_kb": 0, 00:20:22.159 "state": "online", 00:20:22.159 "raid_level": "raid1", 00:20:22.159 "superblock": true, 00:20:22.159 "num_base_bdevs": 2, 00:20:22.159 "num_base_bdevs_discovered": 1, 00:20:22.159 "num_base_bdevs_operational": 1, 00:20:22.159 "base_bdevs_list": [ 00:20:22.159 { 00:20:22.159 "name": null, 00:20:22.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.159 "is_configured": false, 00:20:22.159 "data_offset": 0, 00:20:22.159 "data_size": 7936 00:20:22.159 }, 00:20:22.159 { 00:20:22.159 "name": "BaseBdev2", 00:20:22.159 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:22.159 "is_configured": true, 00:20:22.159 "data_offset": 256, 00:20:22.159 "data_size": 7936 00:20:22.159 } 00:20:22.159 ] 00:20:22.159 }' 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.159 19:09:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.728 19:09:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:22.728 19:09:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.728 19:09:49 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- common/autotest_common.sh@10 -- # set +x 00:20:22.728 [2024-11-26 19:09:49.183417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:22.728 [2024-11-26 19:09:49.183708] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:22.728 [2024-11-26 19:09:49.183738] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:22.728 [2024-11-26 19:09:49.183806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:22.728 [2024-11-26 19:09:49.200606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:22.728 19:09:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.728 19:09:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:22.728 [2024-11-26 19:09:49.203356] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:23.663 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.663 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.663 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.663 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.663 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.664 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.664 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.664 
19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.664 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.664 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.664 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.664 "name": "raid_bdev1", 00:20:23.664 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:23.664 "strip_size_kb": 0, 00:20:23.664 "state": "online", 00:20:23.664 "raid_level": "raid1", 00:20:23.664 "superblock": true, 00:20:23.664 "num_base_bdevs": 2, 00:20:23.664 "num_base_bdevs_discovered": 2, 00:20:23.664 "num_base_bdevs_operational": 2, 00:20:23.664 "process": { 00:20:23.664 "type": "rebuild", 00:20:23.664 "target": "spare", 00:20:23.664 "progress": { 00:20:23.664 "blocks": 2304, 00:20:23.664 "percent": 29 00:20:23.664 } 00:20:23.664 }, 00:20:23.664 "base_bdevs_list": [ 00:20:23.664 { 00:20:23.664 "name": "spare", 00:20:23.664 "uuid": "9d18cc88-b39a-5758-a377-807f7bfab33a", 00:20:23.664 "is_configured": true, 00:20:23.664 "data_offset": 256, 00:20:23.664 "data_size": 7936 00:20:23.664 }, 00:20:23.664 { 00:20:23.664 "name": "BaseBdev2", 00:20:23.664 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:23.664 "is_configured": true, 00:20:23.664 "data_offset": 256, 00:20:23.664 "data_size": 7936 00:20:23.664 } 00:20:23.664 ] 00:20:23.664 }' 00:20:23.664 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.923 19:09:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.923 [2024-11-26 19:09:50.393693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:23.923 [2024-11-26 19:09:50.415810] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:23.923 [2024-11-26 19:09:50.415910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.923 [2024-11-26 19:09:50.415936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:23.923 [2024-11-26 19:09:50.415951] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.923 "name": "raid_bdev1", 00:20:23.923 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:23.923 "strip_size_kb": 0, 00:20:23.923 "state": "online", 00:20:23.923 "raid_level": "raid1", 00:20:23.923 "superblock": true, 00:20:23.923 "num_base_bdevs": 2, 00:20:23.923 "num_base_bdevs_discovered": 1, 00:20:23.923 "num_base_bdevs_operational": 1, 00:20:23.923 "base_bdevs_list": [ 00:20:23.923 { 00:20:23.923 "name": null, 00:20:23.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.923 "is_configured": false, 00:20:23.923 "data_offset": 0, 00:20:23.923 "data_size": 7936 00:20:23.923 }, 00:20:23.923 { 00:20:23.923 "name": "BaseBdev2", 00:20:23.923 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:23.923 "is_configured": true, 00:20:23.923 "data_offset": 
256, 00:20:23.923 "data_size": 7936 00:20:23.923 } 00:20:23.923 ] 00:20:23.923 }' 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.923 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.491 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:24.491 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.491 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.491 [2024-11-26 19:09:50.954140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:24.491 [2024-11-26 19:09:50.954431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.491 [2024-11-26 19:09:50.954484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:24.491 [2024-11-26 19:09:50.954506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.491 [2024-11-26 19:09:50.954787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.491 [2024-11-26 19:09:50.954818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:24.491 [2024-11-26 19:09:50.954918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:24.491 [2024-11-26 19:09:50.954949] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:24.491 [2024-11-26 19:09:50.954964] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:24.491 [2024-11-26 19:09:50.954996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.491 [2024-11-26 19:09:50.972465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:24.491 spare 00:20:24.491 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.491 19:09:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:24.491 [2024-11-26 19:09:50.975213] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:25.429 19:09:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.429 19:09:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.429 19:09:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:25.429 19:09:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:25.429 19:09:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.429 19:09:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.429 19:09:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.429 19:09:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.429 19:09:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.429 19:09:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.429 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:25.429 "name": "raid_bdev1", 00:20:25.429 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:25.429 "strip_size_kb": 0, 00:20:25.429 "state": "online", 00:20:25.429 "raid_level": "raid1", 00:20:25.429 "superblock": true, 00:20:25.429 "num_base_bdevs": 2, 00:20:25.429 "num_base_bdevs_discovered": 2, 00:20:25.429 "num_base_bdevs_operational": 2, 00:20:25.429 "process": { 00:20:25.429 "type": "rebuild", 00:20:25.429 "target": "spare", 00:20:25.429 "progress": { 00:20:25.429 "blocks": 2560, 00:20:25.429 "percent": 32 00:20:25.429 } 00:20:25.429 }, 00:20:25.429 "base_bdevs_list": [ 00:20:25.429 { 00:20:25.429 "name": "spare", 00:20:25.429 "uuid": "9d18cc88-b39a-5758-a377-807f7bfab33a", 00:20:25.429 "is_configured": true, 00:20:25.429 "data_offset": 256, 00:20:25.429 "data_size": 7936 00:20:25.429 }, 00:20:25.429 { 00:20:25.429 "name": "BaseBdev2", 00:20:25.429 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:25.429 "is_configured": true, 00:20:25.429 "data_offset": 256, 00:20:25.429 "data_size": 7936 00:20:25.429 } 00:20:25.429 ] 00:20:25.429 }' 00:20:25.429 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.688 [2024-11-26 
19:09:52.133515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.688 [2024-11-26 19:09:52.187340] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:25.688 [2024-11-26 19:09:52.187467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.688 [2024-11-26 19:09:52.187499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.688 [2024-11-26 19:09:52.187511] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.688 19:09:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.688 "name": "raid_bdev1", 00:20:25.688 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:25.688 "strip_size_kb": 0, 00:20:25.688 "state": "online", 00:20:25.688 "raid_level": "raid1", 00:20:25.688 "superblock": true, 00:20:25.688 "num_base_bdevs": 2, 00:20:25.688 "num_base_bdevs_discovered": 1, 00:20:25.688 "num_base_bdevs_operational": 1, 00:20:25.688 "base_bdevs_list": [ 00:20:25.688 { 00:20:25.688 "name": null, 00:20:25.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.688 "is_configured": false, 00:20:25.688 "data_offset": 0, 00:20:25.688 "data_size": 7936 00:20:25.688 }, 00:20:25.688 { 00:20:25.688 "name": "BaseBdev2", 00:20:25.688 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:25.688 "is_configured": true, 00:20:25.688 "data_offset": 256, 00:20:25.688 "data_size": 7936 00:20:25.688 } 00:20:25.688 ] 00:20:25.688 }' 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.688 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.255 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.255 19:09:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.255 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:26.255 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:26.255 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.255 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.255 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.255 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.255 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.255 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.255 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.255 "name": "raid_bdev1", 00:20:26.255 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:26.255 "strip_size_kb": 0, 00:20:26.255 "state": "online", 00:20:26.255 "raid_level": "raid1", 00:20:26.255 "superblock": true, 00:20:26.255 "num_base_bdevs": 2, 00:20:26.255 "num_base_bdevs_discovered": 1, 00:20:26.255 "num_base_bdevs_operational": 1, 00:20:26.255 "base_bdevs_list": [ 00:20:26.255 { 00:20:26.255 "name": null, 00:20:26.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.255 "is_configured": false, 00:20:26.255 "data_offset": 0, 00:20:26.255 "data_size": 7936 00:20:26.255 }, 00:20:26.255 { 00:20:26.255 "name": "BaseBdev2", 00:20:26.255 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:26.255 "is_configured": true, 00:20:26.255 "data_offset": 256, 
00:20:26.255 "data_size": 7936 00:20:26.255 } 00:20:26.255 ] 00:20:26.255 }' 00:20:26.255 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.513 [2024-11-26 19:09:52.946937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:26.513 [2024-11-26 19:09:52.947020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.513 [2024-11-26 19:09:52.947057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:26.513 [2024-11-26 19:09:52.947072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.513 [2024-11-26 19:09:52.947369] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.513 [2024-11-26 19:09:52.947395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:26.513 [2024-11-26 19:09:52.947471] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:26.513 [2024-11-26 19:09:52.947493] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:26.513 [2024-11-26 19:09:52.947507] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:26.513 [2024-11-26 19:09:52.947521] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:26.513 BaseBdev1 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.513 19:09:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.449 19:09:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.449 19:09:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.449 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.449 "name": "raid_bdev1", 00:20:27.449 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:27.449 "strip_size_kb": 0, 00:20:27.449 "state": "online", 00:20:27.449 "raid_level": "raid1", 00:20:27.449 "superblock": true, 00:20:27.449 "num_base_bdevs": 2, 00:20:27.449 "num_base_bdevs_discovered": 1, 00:20:27.449 "num_base_bdevs_operational": 1, 00:20:27.449 "base_bdevs_list": [ 00:20:27.449 { 00:20:27.449 "name": null, 00:20:27.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.449 "is_configured": false, 00:20:27.449 "data_offset": 0, 00:20:27.449 "data_size": 7936 00:20:27.449 }, 00:20:27.449 { 00:20:27.449 "name": "BaseBdev2", 00:20:27.449 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:27.449 "is_configured": true, 00:20:27.449 "data_offset": 256, 00:20:27.449 "data_size": 7936 00:20:27.449 } 00:20:27.449 ] 00:20:27.449 }' 00:20:27.449 19:09:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.449 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.016 "name": "raid_bdev1", 00:20:28.016 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:28.016 "strip_size_kb": 0, 00:20:28.016 "state": "online", 00:20:28.016 "raid_level": "raid1", 00:20:28.016 "superblock": true, 00:20:28.016 "num_base_bdevs": 2, 00:20:28.016 "num_base_bdevs_discovered": 1, 00:20:28.016 "num_base_bdevs_operational": 1, 00:20:28.016 "base_bdevs_list": [ 00:20:28.016 { 00:20:28.016 "name": 
null, 00:20:28.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.016 "is_configured": false, 00:20:28.016 "data_offset": 0, 00:20:28.016 "data_size": 7936 00:20:28.016 }, 00:20:28.016 { 00:20:28.016 "name": "BaseBdev2", 00:20:28.016 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:28.016 "is_configured": true, 00:20:28.016 "data_offset": 256, 00:20:28.016 "data_size": 7936 00:20:28.016 } 00:20:28.016 ] 00:20:28.016 }' 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:28.016 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.274 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:28.274 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:28.274 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:28.274 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:28.274 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:28.274 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.274 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:28.274 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.274 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:28.274 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.274 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.274 [2024-11-26 19:09:54.655553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.274 [2024-11-26 19:09:54.655954] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:28.275 [2024-11-26 19:09:54.655996] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:28.275 request: 00:20:28.275 { 00:20:28.275 "base_bdev": "BaseBdev1", 00:20:28.275 "raid_bdev": "raid_bdev1", 00:20:28.275 "method": "bdev_raid_add_base_bdev", 00:20:28.275 "req_id": 1 00:20:28.275 } 00:20:28.275 Got JSON-RPC error response 00:20:28.275 response: 00:20:28.275 { 00:20:28.275 "code": -22, 00:20:28.275 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:28.275 } 00:20:28.275 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:28.275 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:28.275 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:28.275 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:28.275 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:28.275 19:09:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.209 "name": "raid_bdev1", 00:20:29.209 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:29.209 "strip_size_kb": 0, 
00:20:29.209 "state": "online", 00:20:29.209 "raid_level": "raid1", 00:20:29.209 "superblock": true, 00:20:29.209 "num_base_bdevs": 2, 00:20:29.209 "num_base_bdevs_discovered": 1, 00:20:29.209 "num_base_bdevs_operational": 1, 00:20:29.209 "base_bdevs_list": [ 00:20:29.209 { 00:20:29.209 "name": null, 00:20:29.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.209 "is_configured": false, 00:20:29.209 "data_offset": 0, 00:20:29.209 "data_size": 7936 00:20:29.209 }, 00:20:29.209 { 00:20:29.209 "name": "BaseBdev2", 00:20:29.209 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:29.209 "is_configured": true, 00:20:29.209 "data_offset": 256, 00:20:29.209 "data_size": 7936 00:20:29.209 } 00:20:29.209 ] 00:20:29.209 }' 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.209 19:09:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.774 
19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.774 "name": "raid_bdev1", 00:20:29.774 "uuid": "ed604d54-3bdc-4771-bef0-cbaadb501ae1", 00:20:29.774 "strip_size_kb": 0, 00:20:29.774 "state": "online", 00:20:29.774 "raid_level": "raid1", 00:20:29.774 "superblock": true, 00:20:29.774 "num_base_bdevs": 2, 00:20:29.774 "num_base_bdevs_discovered": 1, 00:20:29.774 "num_base_bdevs_operational": 1, 00:20:29.774 "base_bdevs_list": [ 00:20:29.774 { 00:20:29.774 "name": null, 00:20:29.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.774 "is_configured": false, 00:20:29.774 "data_offset": 0, 00:20:29.774 "data_size": 7936 00:20:29.774 }, 00:20:29.774 { 00:20:29.774 "name": "BaseBdev2", 00:20:29.774 "uuid": "3c6c8fa6-3c7a-57d8-b1a5-5b511e10b6a3", 00:20:29.774 "is_configured": true, 00:20:29.774 "data_offset": 256, 00:20:29.774 "data_size": 7936 00:20:29.774 } 00:20:29.774 ] 00:20:29.774 }' 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 90036 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90036 ']' 00:20:29.774 19:09:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90036 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90036 00:20:29.774 killing process with pid 90036 00:20:29.774 Received shutdown signal, test time was about 60.000000 seconds 00:20:29.774 00:20:29.774 Latency(us) 00:20:29.774 [2024-11-26T19:09:56.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.774 [2024-11-26T19:09:56.397Z] =================================================================================================================== 00:20:29.774 [2024-11-26T19:09:56.397Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90036' 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 90036 00:20:29.774 [2024-11-26 19:09:56.382643] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:29.774 19:09:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 90036 00:20:29.774 [2024-11-26 19:09:56.382865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:29.774 [2024-11-26 19:09:56.382939] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:29.774 [2024-11-26 19:09:56.382959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:30.340 [2024-11-26 19:09:56.683009] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:31.274 ************************************ 00:20:31.274 END TEST raid_rebuild_test_sb_md_interleaved 00:20:31.274 ************************************ 00:20:31.274 19:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:31.274 00:20:31.274 real 0m18.736s 00:20:31.274 user 0m25.368s 00:20:31.274 sys 0m1.550s 00:20:31.274 19:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:31.274 19:09:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.533 19:09:57 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:31.533 19:09:57 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:31.533 19:09:57 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 90036 ']' 00:20:31.533 19:09:57 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 90036 00:20:31.533 19:09:57 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:31.533 ************************************ 00:20:31.533 END TEST bdev_raid 00:20:31.533 ************************************ 00:20:31.533 00:20:31.533 real 13m25.632s 00:20:31.533 user 18m48.096s 00:20:31.533 sys 1m54.577s 00:20:31.533 19:09:57 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:31.533 19:09:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:31.533 19:09:58 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:31.533 19:09:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:31.533 19:09:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.533 19:09:58 -- common/autotest_common.sh@10 -- # set +x 00:20:31.533 
************************************ 00:20:31.533 START TEST spdkcli_raid 00:20:31.533 ************************************ 00:20:31.533 19:09:58 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:31.533 * Looking for test storage... 00:20:31.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:31.533 19:09:58 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:31.533 19:09:58 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:31.533 19:09:58 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:31.792 19:09:58 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:31.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.792 --rc genhtml_branch_coverage=1 00:20:31.792 --rc genhtml_function_coverage=1 00:20:31.792 --rc genhtml_legend=1 00:20:31.792 --rc geninfo_all_blocks=1 00:20:31.792 --rc geninfo_unexecuted_blocks=1 00:20:31.792 00:20:31.792 ' 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:31.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.792 --rc genhtml_branch_coverage=1 00:20:31.792 --rc genhtml_function_coverage=1 00:20:31.792 --rc genhtml_legend=1 00:20:31.792 --rc geninfo_all_blocks=1 00:20:31.792 --rc geninfo_unexecuted_blocks=1 00:20:31.792 00:20:31.792 ' 00:20:31.792 
19:09:58 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:31.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.792 --rc genhtml_branch_coverage=1 00:20:31.792 --rc genhtml_function_coverage=1 00:20:31.792 --rc genhtml_legend=1 00:20:31.792 --rc geninfo_all_blocks=1 00:20:31.792 --rc geninfo_unexecuted_blocks=1 00:20:31.792 00:20:31.792 ' 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:31.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.792 --rc genhtml_branch_coverage=1 00:20:31.792 --rc genhtml_function_coverage=1 00:20:31.792 --rc genhtml_legend=1 00:20:31.792 --rc geninfo_all_blocks=1 00:20:31.792 --rc geninfo_unexecuted_blocks=1 00:20:31.792 00:20:31.792 ' 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:31.792 19:09:58 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:31.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90720 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:31.792 19:09:58 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90720 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90720 ']' 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.792 19:09:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:31.792 [2024-11-26 19:09:58.395897] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:20:31.792 [2024-11-26 19:09:58.396271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90720 ] 00:20:32.051 [2024-11-26 19:09:58.586135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:32.309 [2024-11-26 19:09:58.742812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.309 [2024-11-26 19:09:58.742826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.273 19:09:59 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.273 19:09:59 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:20:33.273 19:09:59 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:33.273 19:09:59 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.273 19:09:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.273 19:09:59 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:33.273 19:09:59 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.273 19:09:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.273 19:09:59 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:33.273 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:33.273 ' 00:20:35.252 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:35.252 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:35.252 19:10:01 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:35.252 19:10:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.252 19:10:01 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.252 19:10:01 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:35.252 19:10:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.252 19:10:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.252 19:10:01 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:35.252 ' 00:20:36.237 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:36.237 19:10:02 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:36.237 19:10:02 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.237 19:10:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.237 19:10:02 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:36.237 19:10:02 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.237 19:10:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.237 19:10:02 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:36.237 19:10:02 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:36.802 19:10:03 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:36.802 19:10:03 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:36.802 19:10:03 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:36.802 19:10:03 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.802 19:10:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.802 19:10:03 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:36.802 19:10:03 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.802 19:10:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.802 19:10:03 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:36.802 ' 00:20:38.177 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:38.177 19:10:04 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:38.177 19:10:04 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.177 19:10:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.177 19:10:04 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:38.177 19:10:04 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.177 19:10:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.177 19:10:04 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:38.177 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:38.177 ' 00:20:39.550 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:39.550 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:39.550 19:10:06 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:39.550 19:10:06 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.550 19:10:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.550 19:10:06 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90720 00:20:39.550 19:10:06 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90720 ']' 00:20:39.550 19:10:06 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90720 00:20:39.550 19:10:06 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:20:39.808 19:10:06 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.808 19:10:06 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90720 00:20:39.808 19:10:06 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.808 19:10:06 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.808 19:10:06 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90720' 00:20:39.808 killing process with pid 90720 00:20:39.808 19:10:06 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90720 00:20:39.808 19:10:06 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90720 00:20:42.358 19:10:08 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:42.358 Process with pid 90720 is not found 00:20:42.358 19:10:08 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90720 ']' 00:20:42.358 19:10:08 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90720 00:20:42.358 19:10:08 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90720 ']' 00:20:42.358 19:10:08 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90720 00:20:42.358 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90720) - No such process 00:20:42.358 19:10:08 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90720 is not found' 00:20:42.358 19:10:08 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:42.358 19:10:08 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:42.358 19:10:08 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:42.358 19:10:08 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:42.358 00:20:42.358 real 0m10.620s 00:20:42.358 user 0m21.836s 00:20:42.358 sys 
0m1.254s 00:20:42.358 19:10:08 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.358 19:10:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.358 ************************************ 00:20:42.358 END TEST spdkcli_raid 00:20:42.358 ************************************ 00:20:42.358 19:10:08 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:42.358 19:10:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.358 19:10:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.358 19:10:08 -- common/autotest_common.sh@10 -- # set +x 00:20:42.358 ************************************ 00:20:42.358 START TEST blockdev_raid5f 00:20:42.358 ************************************ 00:20:42.358 19:10:08 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:42.358 * Looking for test storage... 00:20:42.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:42.358 19:10:08 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:42.358 19:10:08 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:20:42.358 19:10:08 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:42.358 19:10:08 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.358 19:10:08 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:42.358 19:10:08 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.358 19:10:08 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:42.358 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.358 --rc genhtml_branch_coverage=1 00:20:42.358 --rc genhtml_function_coverage=1 00:20:42.358 --rc genhtml_legend=1 00:20:42.358 --rc geninfo_all_blocks=1 00:20:42.358 --rc geninfo_unexecuted_blocks=1 00:20:42.358 00:20:42.358 ' 00:20:42.358 19:10:08 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.358 --rc genhtml_branch_coverage=1 00:20:42.358 --rc genhtml_function_coverage=1 00:20:42.358 --rc genhtml_legend=1 00:20:42.358 --rc geninfo_all_blocks=1 00:20:42.358 --rc geninfo_unexecuted_blocks=1 00:20:42.358 00:20:42.358 ' 00:20:42.358 19:10:08 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.358 --rc genhtml_branch_coverage=1 00:20:42.358 --rc genhtml_function_coverage=1 00:20:42.358 --rc genhtml_legend=1 00:20:42.358 --rc geninfo_all_blocks=1 00:20:42.358 --rc geninfo_unexecuted_blocks=1 00:20:42.358 00:20:42.358 ' 00:20:42.358 19:10:08 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:42.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.358 --rc genhtml_branch_coverage=1 00:20:42.358 --rc genhtml_function_coverage=1 00:20:42.358 --rc genhtml_legend=1 00:20:42.358 --rc geninfo_all_blocks=1 00:20:42.358 --rc geninfo_unexecuted_blocks=1 00:20:42.358 00:20:42.358 ' 00:20:42.358 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:42.358 19:10:08 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:42.358 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:42.358 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:42.358 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:42.358 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90995 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90995 00:20:42.359 19:10:08 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:42.359 19:10:08 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90995 ']' 00:20:42.359 19:10:08 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.359 19:10:08 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.359 19:10:08 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.359 19:10:08 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.359 19:10:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:42.617 [2024-11-26 19:10:09.039712] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:20:42.618 [2024-11-26 19:10:09.040072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90995 ] 00:20:42.875 [2024-11-26 19:10:09.294249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.875 [2024-11-26 19:10:09.473203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.252 19:10:10 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.252 19:10:10 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:20:44.252 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:20:44.252 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:20:44.252 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:44.252 19:10:10 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.252 19:10:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:44.252 Malloc0 00:20:44.252 Malloc1 00:20:44.252 Malloc2 00:20:44.252 19:10:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.252 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:20:44.252 19:10:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.252 19:10:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:44.252 19:10:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.252 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:20:44.253 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.253 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.253 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.253 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:20:44.253 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:20:44.253 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:44.253 19:10:10 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.253 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:20:44.253 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:20:44.253 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "441061b8-1187-492c-b40d-d0aed232dd5b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "441061b8-1187-492c-b40d-d0aed232dd5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "441061b8-1187-492c-b40d-d0aed232dd5b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "de068570-be5a-4efe-84fa-c4cace18f076",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"71169569-f292-4d95-8846-970563899fa3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f8749603-dcdc-4a98-b3cf-d59a23348f9a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:44.511 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:20:44.511 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:20:44.511 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:20:44.511 19:10:10 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90995 00:20:44.511 19:10:10 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90995 ']' 00:20:44.511 19:10:10 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90995 00:20:44.511 19:10:10 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:20:44.511 19:10:10 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.511 19:10:10 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90995 00:20:44.511 killing process with pid 90995 00:20:44.511 19:10:10 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.511 19:10:10 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.511 19:10:10 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90995' 00:20:44.512 19:10:10 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90995 00:20:44.512 19:10:10 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90995 00:20:47.793 19:10:13 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:47.793 19:10:13 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:47.793 19:10:13 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:47.793 19:10:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.793 19:10:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:47.793 ************************************ 00:20:47.793 START TEST bdev_hello_world 00:20:47.793 ************************************ 00:20:47.793 19:10:13 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:47.793 [2024-11-26 19:10:13.798871] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:20:47.794 [2024-11-26 19:10:13.799081] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91068 ] 00:20:47.794 [2024-11-26 19:10:13.989330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.794 [2024-11-26 19:10:14.191434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.359 [2024-11-26 19:10:14.846511] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:48.359 [2024-11-26 19:10:14.846605] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:48.359 [2024-11-26 19:10:14.846631] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:48.359 [2024-11-26 19:10:14.847292] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:48.359 [2024-11-26 19:10:14.847471] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:48.359 [2024-11-26 19:10:14.847500] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:48.359 [2024-11-26 19:10:14.847572] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:20:48.359 00:20:48.359 [2024-11-26 19:10:14.847601] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:49.733 00:20:49.733 real 0m2.660s 00:20:49.733 user 0m2.126s 00:20:49.733 sys 0m0.405s 00:20:49.733 ************************************ 00:20:49.733 END TEST bdev_hello_world 00:20:49.733 19:10:16 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.733 19:10:16 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:49.733 ************************************ 00:20:49.991 19:10:16 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:20:49.991 19:10:16 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:49.991 19:10:16 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.991 19:10:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:49.991 ************************************ 00:20:49.991 START TEST bdev_bounds 00:20:49.991 ************************************ 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=91110 00:20:49.991 Process bdevio pid: 91110 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 91110' 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 91110 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 91110 ']' 00:20:49.991 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.991 19:10:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:49.991 [2024-11-26 19:10:16.534583] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:20:49.991 [2024-11-26 19:10:16.535042] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91110 ] 00:20:50.248 [2024-11-26 19:10:16.731095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:50.506 [2024-11-26 19:10:16.893244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.506 [2024-11-26 19:10:16.893347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.506 [2024-11-26 19:10:16.893363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.070 19:10:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.070 19:10:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:51.070 19:10:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:51.328 I/O targets: 00:20:51.328 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:51.328 00:20:51.328 
00:20:51.328 CUnit - A unit testing framework for C - Version 2.1-3 00:20:51.328 http://cunit.sourceforge.net/ 00:20:51.328 00:20:51.328 00:20:51.328 Suite: bdevio tests on: raid5f 00:20:51.328 Test: blockdev write read block ...passed 00:20:51.328 Test: blockdev write zeroes read block ...passed 00:20:51.328 Test: blockdev write zeroes read no split ...passed 00:20:51.328 Test: blockdev write zeroes read split ...passed 00:20:51.587 Test: blockdev write zeroes read split partial ...passed 00:20:51.587 Test: blockdev reset ...passed 00:20:51.587 Test: blockdev write read 8 blocks ...passed 00:20:51.587 Test: blockdev write read size > 128k ...passed 00:20:51.587 Test: blockdev write read invalid size ...passed 00:20:51.587 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:51.587 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:51.587 Test: blockdev write read max offset ...passed 00:20:51.587 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:51.587 Test: blockdev writev readv 8 blocks ...passed 00:20:51.587 Test: blockdev writev readv 30 x 1block ...passed 00:20:51.587 Test: blockdev writev readv block ...passed 00:20:51.587 Test: blockdev writev readv size > 128k ...passed 00:20:51.587 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:51.587 Test: blockdev comparev and writev ...passed 00:20:51.587 Test: blockdev nvme passthru rw ...passed 00:20:51.587 Test: blockdev nvme passthru vendor specific ...passed 00:20:51.587 Test: blockdev nvme admin passthru ...passed 00:20:51.587 Test: blockdev copy ...passed 00:20:51.587 00:20:51.587 Run Summary: Type Total Ran Passed Failed Inactive 00:20:51.587 suites 1 1 n/a 0 0 00:20:51.587 tests 23 23 23 0 0 00:20:51.587 asserts 130 130 130 0 n/a 00:20:51.587 00:20:51.587 Elapsed time = 0.629 seconds 00:20:51.587 0 00:20:51.587 19:10:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 91110 00:20:51.587 
19:10:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 91110 ']' 00:20:51.587 19:10:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 91110 00:20:51.587 19:10:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:51.587 19:10:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.587 19:10:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91110 00:20:51.587 19:10:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.587 19:10:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.587 19:10:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91110' 00:20:51.587 killing process with pid 91110 00:20:51.587 19:10:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 91110 00:20:51.587 19:10:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 91110 00:20:52.961 19:10:19 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:52.961 00:20:52.961 real 0m3.126s 00:20:52.961 user 0m7.673s 00:20:52.961 sys 0m0.551s 00:20:52.961 19:10:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.961 19:10:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:52.961 ************************************ 00:20:52.961 END TEST bdev_bounds 00:20:52.961 ************************************ 00:20:52.961 19:10:19 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:52.961 19:10:19 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:52.961 19:10:19 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.961 
19:10:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:52.961 ************************************ 00:20:52.961 START TEST bdev_nbd 00:20:52.961 ************************************ 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=91181 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 91181 /var/tmp/spdk-nbd.sock 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 91181 ']' 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:52.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.961 19:10:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:53.220 [2024-11-26 19:10:19.676322] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:20:53.220 [2024-11-26 19:10:19.676487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.478 [2024-11-26 19:10:19.848997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.478 [2024-11-26 19:10:20.002576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:54.414 19:10:20 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:54.414 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:54.414 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:54.672 1+0 records in 00:20:54.672 1+0 records out 00:20:54.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314883 s, 13.0 MB/s 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:54.672 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:54.930 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:54.930 { 00:20:54.930 "nbd_device": "/dev/nbd0", 00:20:54.930 "bdev_name": "raid5f" 00:20:54.930 } 00:20:54.930 ]' 00:20:54.930 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:54.930 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:54.930 { 00:20:54.930 "nbd_device": "/dev/nbd0", 00:20:54.930 "bdev_name": "raid5f" 00:20:54.930 } 00:20:54.930 ]' 00:20:54.930 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:54.930 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:54.930 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:54.930 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:54.930 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:54.930 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:54.930 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.930 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:55.189 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:55.189 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:55.189 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:55.189 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:55.189 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:55.189 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:55.189 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:55.189 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:55.189 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:55.189 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.189 19:10:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:55.449 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:55.449 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:55.449 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:55.707 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:55.966 /dev/nbd0 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:55.966 19:10:22 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:55.966 1+0 records in 00:20:55.966 1+0 records out 00:20:55.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252427 s, 16.2 MB/s 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.966 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:56.250 { 00:20:56.250 "nbd_device": "/dev/nbd0", 00:20:56.250 "bdev_name": "raid5f" 00:20:56.250 } 00:20:56.250 ]' 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:56.250 { 00:20:56.250 "nbd_device": "/dev/nbd0", 00:20:56.250 "bdev_name": "raid5f" 00:20:56.250 } 00:20:56.250 ]' 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:56.250 256+0 records in 00:20:56.250 256+0 records out 00:20:56.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00722024 s, 145 MB/s 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:56.250 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:56.509 256+0 records in 00:20:56.509 256+0 records out 00:20:56.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0414947 s, 25.3 MB/s 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:56.509 19:10:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:56.768 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:56.768 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:56.768 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:56.768 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:56.768 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:56.768 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:56.768 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:56.768 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:56.768 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:56.768 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.768 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:57.027 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:57.593 malloc_lvol_verify 00:20:57.593 19:10:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:57.851 19da4e87-a4ae-4083-8b19-8caf353a0604 00:20:57.851 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:58.108 798387d0-34ec-4aec-adcb-3dd1df403224 00:20:58.108 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:58.366 /dev/nbd0 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:58.366 mke2fs 1.47.0 (5-Feb-2023) 00:20:58.366 Discarding device blocks: 0/4096 done 00:20:58.366 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:58.366 00:20:58.366 Allocating group tables: 0/1 done 00:20:58.366 Writing inode tables: 0/1 done 00:20:58.366 Creating journal (1024 blocks): done 00:20:58.366 Writing superblocks and filesystem accounting information: 0/1 done 00:20:58.366 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.366 19:10:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:58.625 19:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:58.912 19:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:58.912 19:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:58.912 19:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 91181 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 91181 ']' 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 91181 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91181 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.913 killing process with pid 91181 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91181' 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 91181 00:20:58.913 19:10:25 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 91181 00:21:00.287 19:10:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:00.287 00:21:00.287 real 0m7.243s 00:21:00.287 user 0m10.464s 00:21:00.287 sys 0m1.550s 00:21:00.287 19:10:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.288 19:10:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:00.288 ************************************ 00:21:00.288 END TEST bdev_nbd 00:21:00.288 ************************************ 00:21:00.288 19:10:26 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:21:00.288 19:10:26 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:21:00.288 19:10:26 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:21:00.288 19:10:26 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:21:00.288 19:10:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.288 19:10:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.288 19:10:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:00.288 ************************************ 00:21:00.288 START TEST bdev_fio 00:21:00.288 ************************************ 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:00.288 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:21:00.288 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:00.547 ************************************ 00:21:00.547 START TEST bdev_fio_rw_verify 00:21:00.547 ************************************ 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:21:00.547 19:10:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:00.547 19:10:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:00.547 19:10:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:00.547 19:10:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:21:00.547 19:10:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:00.547 19:10:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:00.806 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:00.806 fio-3.35 00:21:00.806 Starting 1 thread 00:21:13.012 00:21:13.012 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91393: Tue Nov 26 19:10:38 2024 00:21:13.012 read: IOPS=8089, BW=31.6MiB/s (33.1MB/s)(316MiB/10001msec) 00:21:13.012 slat (usec): min=24, max=105, avg=31.46, stdev= 5.07 00:21:13.012 clat (usec): min=15, max=579, avg=198.35, stdev=75.44 00:21:13.012 lat (usec): min=47, max=652, avg=229.81, stdev=76.42 00:21:13.012 clat percentiles (usec): 00:21:13.012 | 50.000th=[ 202], 99.000th=[ 363], 99.900th=[ 420], 99.990th=[ 502], 00:21:13.012 | 99.999th=[ 578] 00:21:13.012 write: IOPS=8460, BW=33.0MiB/s (34.7MB/s)(327MiB/9891msec); 0 zone resets 00:21:13.012 slat (usec): min=12, max=233, avg=24.07, stdev= 5.92 00:21:13.012 clat (usec): min=85, max=1364, avg=452.81, stdev=65.75 00:21:13.012 lat (usec): min=106, max=1531, avg=476.89, stdev=67.86 00:21:13.012 clat percentiles (usec): 00:21:13.012 | 50.000th=[ 457], 99.000th=[ 644], 99.900th=[ 1004], 99.990th=[ 1237], 00:21:13.012 | 99.999th=[ 1369] 00:21:13.012 bw ( KiB/s): min=30944, max=35288, per=98.93%, avg=33479.16, stdev=1370.85, samples=19 00:21:13.012 iops : min= 7736, max= 8822, avg=8369.79, stdev=342.71, samples=19 00:21:13.012 lat (usec) : 20=0.01%, 100=5.55%, 250=29.41%, 
500=56.39%, 750=8.54% 00:21:13.012 lat (usec) : 1000=0.06% 00:21:13.012 lat (msec) : 2=0.05% 00:21:13.012 cpu : usr=98.68%, sys=0.44%, ctx=26, majf=0, minf=7082 00:21:13.012 IO depths : 1=7.8%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:13.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.012 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.012 issued rwts: total=80904,83684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.012 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:13.012 00:21:13.012 Run status group 0 (all jobs): 00:21:13.012 READ: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=316MiB (331MB), run=10001-10001msec 00:21:13.012 WRITE: bw=33.0MiB/s (34.7MB/s), 33.0MiB/s-33.0MiB/s (34.7MB/s-34.7MB/s), io=327MiB (343MB), run=9891-9891msec 00:21:13.593 ----------------------------------------------------- 00:21:13.593 Suppressions used: 00:21:13.593 count bytes template 00:21:13.593 1 7 /usr/src/fio/parse.c 00:21:13.593 266 25536 /usr/src/fio/iolog.c 00:21:13.593 1 8 libtcmalloc_minimal.so 00:21:13.593 1 904 libcrypto.so 00:21:13.593 ----------------------------------------------------- 00:21:13.593 00:21:13.593 00:21:13.593 real 0m13.170s 00:21:13.593 user 0m13.460s 00:21:13.593 sys 0m0.923s 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:13.593 ************************************ 00:21:13.593 END TEST bdev_fio_rw_verify 00:21:13.593 ************************************ 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:13.593 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "441061b8-1187-492c-b40d-d0aed232dd5b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "441061b8-1187-492c-b40d-d0aed232dd5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "441061b8-1187-492c-b40d-d0aed232dd5b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "de068570-be5a-4efe-84fa-c4cace18f076",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "71169569-f292-4d95-8846-970563899fa3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f8749603-dcdc-4a98-b3cf-d59a23348f9a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:13.856 /home/vagrant/spdk_repo/spdk 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:21:13.856 00:21:13.856 real 0m13.409s 
00:21:13.856 user 0m13.569s 00:21:13.856 sys 0m1.014s 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.856 19:10:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:13.856 ************************************ 00:21:13.856 END TEST bdev_fio 00:21:13.856 ************************************ 00:21:13.856 19:10:40 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:13.856 19:10:40 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:13.856 19:10:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:13.856 19:10:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.856 19:10:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:13.856 ************************************ 00:21:13.856 START TEST bdev_verify 00:21:13.856 ************************************ 00:21:13.856 19:10:40 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:13.856 [2024-11-26 19:10:40.430330] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 
00:21:13.856 [2024-11-26 19:10:40.430502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91555 ] 00:21:14.114 [2024-11-26 19:10:40.615935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:14.373 [2024-11-26 19:10:40.795464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.373 [2024-11-26 19:10:40.795477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.941 Running I/O for 5 seconds... 00:21:16.816 10158.00 IOPS, 39.68 MiB/s [2024-11-26T19:10:44.816Z] 11124.50 IOPS, 43.46 MiB/s [2024-11-26T19:10:45.752Z] 11710.33 IOPS, 45.74 MiB/s [2024-11-26T19:10:46.690Z] 12059.75 IOPS, 47.11 MiB/s [2024-11-26T19:10:46.690Z] 12088.60 IOPS, 47.22 MiB/s 00:21:20.067 Latency(us) 00:21:20.067 [2024-11-26T19:10:46.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.067 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:20.067 Verification LBA range: start 0x0 length 0x2000 00:21:20.067 raid5f : 5.01 6021.42 23.52 0.00 0.00 32051.09 301.61 27048.49 00:21:20.067 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:20.067 Verification LBA range: start 0x2000 length 0x2000 00:21:20.067 raid5f : 5.01 6042.31 23.60 0.00 0.00 31861.54 2144.81 27048.49 00:21:20.067 [2024-11-26T19:10:46.690Z] =================================================================================================================== 00:21:20.067 [2024-11-26T19:10:46.690Z] Total : 12063.72 47.12 0.00 0.00 31956.14 301.61 27048.49 00:21:21.444 00:21:21.444 real 0m7.600s 00:21:21.444 user 0m13.806s 00:21:21.444 sys 0m0.399s 00:21:21.444 19:10:47 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.444 19:10:47 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:21.444 ************************************ 00:21:21.444 END TEST bdev_verify 00:21:21.444 ************************************ 00:21:21.444 19:10:47 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:21.444 19:10:47 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:21.444 19:10:47 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.444 19:10:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:21.444 ************************************ 00:21:21.444 START TEST bdev_verify_big_io 00:21:21.444 ************************************ 00:21:21.444 19:10:47 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:21.703 [2024-11-26 19:10:48.080292] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:21:21.703 [2024-11-26 19:10:48.080473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91649 ] 00:21:21.703 [2024-11-26 19:10:48.261211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:21.962 [2024-11-26 19:10:48.416156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.962 [2024-11-26 19:10:48.416157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.530 Running I/O for 5 seconds... 
00:21:24.470 505.00 IOPS, 31.56 MiB/s [2024-11-26T19:10:52.468Z] 507.00 IOPS, 31.69 MiB/s [2024-11-26T19:10:53.405Z] 549.00 IOPS, 34.31 MiB/s [2024-11-26T19:10:54.350Z] 634.00 IOPS, 39.62 MiB/s [2024-11-26T19:10:54.350Z] 660.00 IOPS, 41.25 MiB/s 00:21:27.727 Latency(us) 00:21:27.727 [2024-11-26T19:10:54.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.727 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:27.727 Verification LBA range: start 0x0 length 0x200 00:21:27.727 raid5f : 5.14 345.83 21.61 0.00 0.00 9243389.95 189.91 457560.44 00:21:27.727 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:27.727 Verification LBA range: start 0x200 length 0x200 00:21:27.727 raid5f : 5.23 339.77 21.24 0.00 0.00 9316809.45 196.42 472812.45 00:21:27.727 [2024-11-26T19:10:54.350Z] =================================================================================================================== 00:21:27.727 [2024-11-26T19:10:54.350Z] Total : 685.59 42.85 0.00 0.00 9280099.70 189.91 472812.45 00:21:29.631 00:21:29.631 real 0m7.773s 00:21:29.631 user 0m14.203s 00:21:29.631 sys 0m0.373s 00:21:29.631 19:10:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:29.631 19:10:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:29.631 ************************************ 00:21:29.631 END TEST bdev_verify_big_io 00:21:29.631 ************************************ 00:21:29.631 19:10:55 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:29.631 19:10:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:29.631 19:10:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:29.631 19:10:55 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:29.631 ************************************ 00:21:29.631 START TEST bdev_write_zeroes 00:21:29.631 ************************************ 00:21:29.631 19:10:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:29.631 [2024-11-26 19:10:55.920026] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:21:29.631 [2024-11-26 19:10:55.920205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91748 ] 00:21:29.631 [2024-11-26 19:10:56.124226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.890 [2024-11-26 19:10:56.302944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.459 Running I/O for 1 seconds... 
00:21:31.399 17631.00 IOPS, 68.87 MiB/s 00:21:31.399 Latency(us) 00:21:31.399 [2024-11-26T19:10:58.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.399 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:31.399 raid5f : 1.01 17617.55 68.82 0.00 0.00 7233.85 2204.39 12153.95 00:21:31.399 [2024-11-26T19:10:58.022Z] =================================================================================================================== 00:21:31.399 [2024-11-26T19:10:58.022Z] Total : 17617.55 68.82 0.00 0.00 7233.85 2204.39 12153.95 00:21:33.315 00:21:33.315 real 0m3.597s 00:21:33.315 user 0m3.076s 00:21:33.315 sys 0m0.387s 00:21:33.315 19:10:59 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.315 19:10:59 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:33.315 ************************************ 00:21:33.315 END TEST bdev_write_zeroes 00:21:33.315 ************************************ 00:21:33.315 19:10:59 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:33.315 19:10:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:33.315 19:10:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.315 19:10:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:33.315 ************************************ 00:21:33.315 START TEST bdev_json_nonenclosed 00:21:33.315 ************************************ 00:21:33.315 19:10:59 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:33.315 [2024-11-26 
19:10:59.578008] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:21:33.315 [2024-11-26 19:10:59.578229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91807 ] 00:21:33.315 [2024-11-26 19:10:59.776470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.573 [2024-11-26 19:10:59.957567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.573 [2024-11-26 19:10:59.957708] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:33.573 [2024-11-26 19:10:59.957756] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:33.573 [2024-11-26 19:10:59.957775] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:33.832 00:21:33.832 real 0m0.795s 00:21:33.832 user 0m0.530s 00:21:33.832 sys 0m0.158s 00:21:33.832 19:11:00 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.832 19:11:00 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:33.832 ************************************ 00:21:33.832 END TEST bdev_json_nonenclosed 00:21:33.832 ************************************ 00:21:33.832 19:11:00 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:33.832 19:11:00 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:33.832 19:11:00 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.832 19:11:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:33.832 
************************************ 00:21:33.832 START TEST bdev_json_nonarray 00:21:33.832 ************************************ 00:21:33.832 19:11:00 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:33.832 [2024-11-26 19:11:00.437435] Starting SPDK v25.01-pre git sha1 971ec0126 / DPDK 24.03.0 initialization... 00:21:33.832 [2024-11-26 19:11:00.437613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91832 ] 00:21:34.094 [2024-11-26 19:11:00.624249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.356 [2024-11-26 19:11:00.774883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.356 [2024-11-26 19:11:00.775070] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:34.356 [2024-11-26 19:11:00.775100] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:34.356 [2024-11-26 19:11:00.775128] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:34.615 00:21:34.615 real 0m0.769s 00:21:34.615 user 0m0.499s 00:21:34.615 sys 0m0.163s 00:21:34.615 19:11:01 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.615 19:11:01 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:34.615 ************************************ 00:21:34.615 END TEST bdev_json_nonarray 00:21:34.615 ************************************ 00:21:34.615 19:11:01 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:21:34.615 19:11:01 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:21:34.615 19:11:01 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:21:34.615 19:11:01 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:21:34.615 19:11:01 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:21:34.615 19:11:01 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:34.615 19:11:01 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:34.615 19:11:01 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:34.615 19:11:01 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:34.615 19:11:01 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:34.615 19:11:01 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:34.615 00:21:34.615 real 0m52.421s 00:21:34.615 user 1m10.862s 00:21:34.615 sys 0m6.138s 00:21:34.615 ************************************ 00:21:34.615 END TEST blockdev_raid5f 00:21:34.615 ************************************ 00:21:34.615 19:11:01 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.615 19:11:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:34.615 19:11:01 -- spdk/autotest.sh@194 -- # uname -s 00:21:34.615 19:11:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:34.615 19:11:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:34.615 19:11:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:34.615 19:11:01 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:34.615 19:11:01 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:34.615 19:11:01 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:34.615 19:11:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.615 19:11:01 -- common/autotest_common.sh@10 -- # set +x 00:21:34.615 19:11:01 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:34.615 19:11:01 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:34.615 19:11:01 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:34.616 19:11:01 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:34.616 19:11:01 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:34.616 19:11:01 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:34.616 19:11:01 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:34.616 19:11:01 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:21:34.616 19:11:01 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:34.616 19:11:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.616 19:11:01 -- common/autotest_common.sh@10 -- # set +x 00:21:34.616 19:11:01 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:34.616 19:11:01 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:34.616 19:11:01 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:34.616 19:11:01 -- common/autotest_common.sh@10 -- # set +x 00:21:36.519 INFO: APP EXITING 00:21:36.519 INFO: killing all VMs 00:21:36.519 INFO: killing vhost app 00:21:36.519 INFO: EXIT DONE 00:21:36.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:36.519 Waiting for block devices as requested 00:21:36.778 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:36.778 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:37.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:37.713 Cleaning 00:21:37.713 Removing: /var/run/dpdk/spdk0/config 00:21:37.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:37.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:37.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:37.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:37.713 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:37.713 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:37.713 Removing: /dev/shm/spdk_tgt_trace.pid57042 00:21:37.713 Removing: /var/run/dpdk/spdk0 00:21:37.713 Removing: /var/run/dpdk/spdk_pid56806 00:21:37.713 Removing: /var/run/dpdk/spdk_pid57042 00:21:37.713 Removing: /var/run/dpdk/spdk_pid57277 00:21:37.713 Removing: /var/run/dpdk/spdk_pid57386 00:21:37.713 Removing: /var/run/dpdk/spdk_pid57441 00:21:37.713 Removing: /var/run/dpdk/spdk_pid57576 00:21:37.713 Removing: 
/var/run/dpdk/spdk_pid57599 00:21:37.713 Removing: /var/run/dpdk/spdk_pid57815 00:21:37.713 Removing: /var/run/dpdk/spdk_pid57921 00:21:37.713 Removing: /var/run/dpdk/spdk_pid58039 00:21:37.713 Removing: /var/run/dpdk/spdk_pid58161 00:21:37.713 Removing: /var/run/dpdk/spdk_pid58269 00:21:37.713 Removing: /var/run/dpdk/spdk_pid58314 00:21:37.713 Removing: /var/run/dpdk/spdk_pid58351 00:21:37.713 Removing: /var/run/dpdk/spdk_pid58427 00:21:37.713 Removing: /var/run/dpdk/spdk_pid58538 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59026 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59103 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59188 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59204 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59370 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59386 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59546 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59562 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59632 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59655 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59719 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59737 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59940 00:21:37.713 Removing: /var/run/dpdk/spdk_pid59982 00:21:37.713 Removing: /var/run/dpdk/spdk_pid60071 00:21:37.713 Removing: /var/run/dpdk/spdk_pid61464 00:21:37.713 Removing: /var/run/dpdk/spdk_pid61681 00:21:37.713 Removing: /var/run/dpdk/spdk_pid61827 00:21:37.713 Removing: /var/run/dpdk/spdk_pid62488 00:21:37.713 Removing: /var/run/dpdk/spdk_pid62704 00:21:37.713 Removing: /var/run/dpdk/spdk_pid62855 00:21:37.713 Removing: /var/run/dpdk/spdk_pid63515 00:21:37.713 Removing: /var/run/dpdk/spdk_pid63850 00:21:37.713 Removing: /var/run/dpdk/spdk_pid63996 00:21:37.713 Removing: /var/run/dpdk/spdk_pid65414 00:21:37.713 Removing: /var/run/dpdk/spdk_pid65673 00:21:37.713 Removing: /var/run/dpdk/spdk_pid65824 00:21:37.713 Removing: /var/run/dpdk/spdk_pid67248 00:21:37.713 Removing: /var/run/dpdk/spdk_pid67507 00:21:37.713 Removing: 
/var/run/dpdk/spdk_pid67658 00:21:37.713 Removing: /var/run/dpdk/spdk_pid69071 00:21:37.713 Removing: /var/run/dpdk/spdk_pid69533 00:21:37.713 Removing: /var/run/dpdk/spdk_pid69681 00:21:37.713 Removing: /var/run/dpdk/spdk_pid71208 00:21:37.713 Removing: /var/run/dpdk/spdk_pid71474 00:21:37.713 Removing: /var/run/dpdk/spdk_pid71625 00:21:37.713 Removing: /var/run/dpdk/spdk_pid73145 00:21:37.713 Removing: /var/run/dpdk/spdk_pid73410 00:21:37.713 Removing: /var/run/dpdk/spdk_pid73561 00:21:37.713 Removing: /var/run/dpdk/spdk_pid75083 00:21:37.713 Removing: /var/run/dpdk/spdk_pid75582 00:21:37.713 Removing: /var/run/dpdk/spdk_pid75734 00:21:37.713 Removing: /var/run/dpdk/spdk_pid75877 00:21:37.713 Removing: /var/run/dpdk/spdk_pid76325 00:21:37.713 Removing: /var/run/dpdk/spdk_pid77087 00:21:37.713 Removing: /var/run/dpdk/spdk_pid77491 00:21:37.713 Removing: /var/run/dpdk/spdk_pid78197 00:21:37.713 Removing: /var/run/dpdk/spdk_pid78690 00:21:37.713 Removing: /var/run/dpdk/spdk_pid79501 00:21:37.713 Removing: /var/run/dpdk/spdk_pid79959 00:21:37.713 Removing: /var/run/dpdk/spdk_pid82000 00:21:37.713 Removing: /var/run/dpdk/spdk_pid82454 00:21:37.713 Removing: /var/run/dpdk/spdk_pid82896 00:21:37.713 Removing: /var/run/dpdk/spdk_pid85031 00:21:37.713 Removing: /var/run/dpdk/spdk_pid85528 00:21:37.713 Removing: /var/run/dpdk/spdk_pid86043 00:21:37.713 Removing: /var/run/dpdk/spdk_pid87124 00:21:37.713 Removing: /var/run/dpdk/spdk_pid87453 00:21:37.713 Removing: /var/run/dpdk/spdk_pid88414 00:21:37.713 Removing: /var/run/dpdk/spdk_pid88748 00:21:37.713 Removing: /var/run/dpdk/spdk_pid89703 00:21:37.713 Removing: /var/run/dpdk/spdk_pid90036 00:21:37.713 Removing: /var/run/dpdk/spdk_pid90720 00:21:37.971 Removing: /var/run/dpdk/spdk_pid90995 00:21:37.971 Removing: /var/run/dpdk/spdk_pid91068 00:21:37.971 Removing: /var/run/dpdk/spdk_pid91110 00:21:37.971 Removing: /var/run/dpdk/spdk_pid91378 00:21:37.971 Removing: /var/run/dpdk/spdk_pid91555 00:21:37.971 Removing: 
/var/run/dpdk/spdk_pid91649 00:21:37.971 Removing: /var/run/dpdk/spdk_pid91748 00:21:37.971 Removing: /var/run/dpdk/spdk_pid91807 00:21:37.971 Removing: /var/run/dpdk/spdk_pid91832 00:21:37.971 Clean 00:21:37.971 19:11:04 -- common/autotest_common.sh@1453 -- # return 0 00:21:37.971 19:11:04 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:37.971 19:11:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.971 19:11:04 -- common/autotest_common.sh@10 -- # set +x 00:21:37.971 19:11:04 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:37.971 19:11:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.971 19:11:04 -- common/autotest_common.sh@10 -- # set +x 00:21:37.971 19:11:04 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:37.971 19:11:04 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:37.971 19:11:04 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:37.971 19:11:04 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:37.971 19:11:04 -- spdk/autotest.sh@398 -- # hostname 00:21:37.971 19:11:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:38.228 geninfo: WARNING: invalid characters removed from testname! 
00:22:04.760 19:11:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:08.948 19:11:34 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:11.480 19:11:37 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:14.012 19:11:40 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:17.297 19:11:43 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:19.828 19:11:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:23.114 19:11:49 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:23.114 19:11:49 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:23.114 19:11:49 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:23.114 19:11:49 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:23.114 19:11:49 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:23.114 19:11:49 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:23.114 + [[ -n 5363 ]] 00:22:23.114 + sudo kill 5363 00:22:23.123 [Pipeline] } 00:22:23.141 [Pipeline] // timeout 00:22:23.147 [Pipeline] } 00:22:23.165 [Pipeline] // stage 00:22:23.172 [Pipeline] } 00:22:23.188 [Pipeline] // catchError 00:22:23.200 [Pipeline] stage 00:22:23.202 [Pipeline] { (Stop VM) 00:22:23.217 [Pipeline] sh 00:22:23.500 + vagrant halt 00:22:27.690 ==> default: Halting domain... 00:22:34.366 [Pipeline] sh 00:22:34.645 + vagrant destroy -f 00:22:38.832 ==> default: Removing domain... 
00:22:38.845 [Pipeline] sh 00:22:39.131 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:22:39.140 [Pipeline] } 00:22:39.158 [Pipeline] // stage 00:22:39.163 [Pipeline] } 00:22:39.180 [Pipeline] // dir 00:22:39.185 [Pipeline] } 00:22:39.202 [Pipeline] // wrap 00:22:39.209 [Pipeline] } 00:22:39.223 [Pipeline] // catchError 00:22:39.234 [Pipeline] stage 00:22:39.236 [Pipeline] { (Epilogue) 00:22:39.251 [Pipeline] sh 00:22:39.546 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:46.113 [Pipeline] catchError 00:22:46.115 [Pipeline] { 00:22:46.127 [Pipeline] sh 00:22:46.409 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:46.409 Artifacts sizes are good 00:22:46.418 [Pipeline] } 00:22:46.432 [Pipeline] // catchError 00:22:46.445 [Pipeline] archiveArtifacts 00:22:46.451 Archiving artifacts 00:22:46.566 [Pipeline] cleanWs 00:22:46.577 [WS-CLEANUP] Deleting project workspace... 00:22:46.577 [WS-CLEANUP] Deferred wipeout is used... 00:22:46.584 [WS-CLEANUP] done 00:22:46.586 [Pipeline] } 00:22:46.601 [Pipeline] // stage 00:22:46.607 [Pipeline] } 00:22:46.621 [Pipeline] // node 00:22:46.627 [Pipeline] End of Pipeline 00:22:46.666 Finished: SUCCESS